OpenText Archive Server
Administration Guide
AR100101-ACN-EN-1
Rev.: 2011-May-16
This documentation has been created for software version 10.1.1.
It is also valid for subsequent software versions as long as no new document version is shipped with the product or is
published at https://knowledge.opentext.com.
Open Text Corporation
275 Frank Tompa Drive, Waterloo, Ontario, Canada, N2L 0A1
Tel: +1-519-888-7111
Toll Free Canada/USA: 1-800-499-6544 International: +800-4996-5440
Fax: +1-519-888-0677
Email: support@opentext.com
FTP: ftp://ftp.opentext.com
For more information, visit http://www.opentext.com
PRE Introduction 17
i About This Document............................................................................. 17
ii Further Information................................................................................. 18
iii Conventions ........................................................................................... 19
Part 1 Overview 21
Part 2 Configuration 43
ii Further Information
This manual is available in PDF and HTML format and can be downloaded from the
OpenText Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/12331031). You can
print the PDF file if you prefer to read longer text on paper.
Online help For all administration clients (Administration Client, Archive Monitoring Web
Client, Document Pipeline Info and configuration properties), online help files are
available. You can open the online help via help menu, help button, or F1.
Other manuals In addition to this Administration Guide, use part 7 "Configuration Parameter
Reference" in OpenText Archive Server - Administration Help (AR-H-ACN) for a
reference of all configuration properties.
To learn about Document Pipelines and their usage in document import scenarios,
refer to the guide OpenText Document Pipelines - Overview and Import Interfaces (AR-CDP).
OpenText Online (http://online.opentext.com/) is a single point of access for the
product information provided by OpenText. You can access the following support
sources through OpenText Online:
• Communities
• Knowledge Center
iii Conventions
User interface
This format is used for elements in the graphical user interface (GUI), such as
buttons, names of icons, menu items, and fields.
Filenames, commands, and sample data
This format is used for file names, paths, URLs, and commands at the command
prompt. It is also used for example data, text to be entered in text boxes, and
other literals.
Note: If you copy command line examples from a PDF, be aware that PDFs
can contain hidden characters. OpenText recommends copying from the
HTML version of the document, if it is available.
KEY NAMES
Key names appear in ALL CAPS, for example:
Press CTRL+V.
<Variable name>
Angled brackets < > are used to denote a variable or placeholder. The user
replaces the brackets and the descriptive content with the appropriate value. For
example, <server_name> becomes serv01.
Internal cross-references
Click the cross-reference to go directly to the reference target in the current
document.
External cross-references
External cross-references are usually text references to other documents.
However, if a document is available in HTML format, for example, in the
Knowledge Center, external references may be active links to a specific section in
the referenced document.
Warnings, notes, and tips
Caution
Cautions help you avoid irreversible problems. Read this information
carefully and follow all instructions.
Important
Important notes help you avoid major problems.
Applications
Applications or services deliver documents or content to Archive Server using
Archive Services or ArchiveLink. Applications also send retrieval requests to
get documents back from Archive Server.
Archive Server
Archive Server incorporates the following components for storing, managing and
retrieving documents and data:
• Document Service (DS) handles the storage and retrieval of documents and
components.
• Storage Manager (STORM) manages and controls the storage devices.
• Administration Server provides the interface to the Administration Client,
which helps the administrator create and maintain the environment of
Archive Servers, including logical archives, storage devices, pools, etc.
Administration Tools
To administer, configure and monitor the components mentioned above, you can
use the following tools:
• Administration Client is the tool to create logical archives and to perform most of
the administrative work like user management and monitoring. See also
“Important Directories on Archive Server” on page 25.
• Archive Monitoring Web Client is used to monitor information regarding the
status of relevant processes, the file system, the size of the database and available
Storage Devices
Various types of storage devices offered by leading storage vendors can be used by
Archive Server for long-term archiving. See “Storage Devices” on page 31.
1. Content is requested by a client. For this, the client sends the unique document
ID and archive ID to Archive Server.
2. Archive Server checks whether the content consists of several components and
where the components are stored.
3. If the content is still stored in the buffer or in the cache, it is delivered directly to
the client.
4. If the content is already archived on the storage device, Archive Server sends a
request to the storage device, retrieves the content, and forwards it to the
application. Content is returned in chunks, so the client does not have to wait
until the complete file is read. This is important for large files or if the client
only reads parts of a file.
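The retrieval flow above can be sketched as follows. This is an illustrative model only, not the actual Archive Server API; all class and method names are invented.

```python
import io

CHUNK_SIZE = 4  # tiny for illustration; a real server uses much larger chunks

class ArchiveServerSketch:
    """Illustrative model of the retrieval flow; not the real product API."""

    def __init__(self):
        self.cache = {}    # (archive_id, doc_id) -> content still in buffer/cache
        self.storage = {}  # (archive_id, doc_id) -> content on the storage device

    def retrieve(self, archive_id, doc_id):
        """Yield the document content in chunks (steps 3 and 4 above)."""
        key = (archive_id, doc_id)
        if key in self.cache:              # step 3: buffer/cache hit, deliver directly
            source = io.BytesIO(self.cache[key])
        elif key in self.storage:          # step 4: request it from the storage device
            source = io.BytesIO(self.storage[key])
        else:
            raise KeyError("unknown document")
        while True:
            chunk = source.read(CHUNK_SIZE)  # chunked delivery: client need not wait
            if not chunk:
                break
            yield chunk

server = ArchiveServerSketch()
server.storage[("A1", "doc42")] = b"archived content"
chunks = list(server.retrieve("A1", "doc42"))
print(b"".join(chunks))  # b'archived content'
```

The point of the chunked loop is that a client can start processing (or stop reading) before the whole file has been transferred.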
• Buffer(s) and disk volumes to store incoming content temporarily; see also “Disk
Buffers” on page 31.
• Storage devices and storage volumes for long-term archiving of content; see also
“Installing and Configuring Storage Devices” on page 56.
• Cache to accelerate content retrieval. Only necessary if slow storage devices are
used; see also “Caches” on page 35.
• Retention period for content; see also “Retention” on page 69.
• Compression and encryption settings; see also “Data Compression” on page 66
and “Encrypted Document Storage” on page 106.
• Security settings and certificates; see also “Configuring the Archive Security
Settings” on page 79.
• An Archive Cache Server, if used; see also “Configuring Archive Cache Server”
on page 193.
See also:
• “Configuring Buffers” on page 47
• “Configuring Disk Volumes” on page 45
See also:
• “Installing and Configuring Storage Devices” on page 56
• “Pools and Pool Types” on page 33
• “Creating and Modifying Pools” on page 84
ISO images
• Very small files
• Same document type
• Same lifecycle
• Bulk deletion at the end of the lifecycle
See also:
• “Installing and Configuring Storage Devices” on page 56
• “Pools and Pool Types” on page 33
• “Creating and Modifying Pools” on page 84
See also:
• “Creating and Modifying Pools” on page 84
• “Installing and Configuring Storage Devices” on page 56
2.4.5 Caches
Caches are used to speed up read access to documents. Archive Server can use
several caches: the disk buffer, the local cache volumes, and an Archive Cache
Server. The local cache resides on the Archive Server and can be configured; it is
recommended to accelerate retrieval, especially with optical storage devices. An
Archive Cache Server reduces data transfer and speeds up retrieval in a WAN. It
is installed on its own host in a separate subnet.
See also:
• “Configuring Caches” on page 53
• “Configuring Disk Volumes” on page 45
• “Configuring Archive Cache Server” on page 193
2.5 Jobs
Jobs are recurrent tasks that are started automatically according to a time
schedule or when certain conditions are met. This allows, for example,
temporarily stored content to be transferred automatically from the disk buffer
to the storage device. See also “Configuring Jobs and Checking Job Protocol” on page 95.
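As a rough illustration of such recurrent tasks, a job can be modeled as a command with a time schedule and an optional condition. The sketch below is hypothetical and not how Archive Server implements its scheduler; job names like Purge_Buffer come from this guide, everything else is invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Job:
    name: str
    interval_s: int                               # run every interval_s seconds
    condition: Callable[[], bool] = lambda: True  # extra condition, e.g. buffer level
    last_run: float = 0.0

    def is_due(self, now: float) -> bool:
        return now - self.last_run >= self.interval_s and self.condition()

def run_due_jobs(jobs, now, log):
    for job in jobs:
        if job.is_due(now):
            log.append(job.name)  # stand-in for executing the job's command
            job.last_run = now

buffer_full = [True]
jobs = [
    Job("Write", interval_s=60),
    Job("Purge_Buffer", interval_s=300, condition=lambda: buffer_full[0]),
]
log = []
run_due_jobs(jobs, now=1000.0, log=log)  # both jobs are due on the first tick
run_due_jobs(jobs, now=1030.0, log=log)  # nothing due yet
run_due_jobs(jobs, now=1061.0, log=log)  # only Write is due again
print(log)  # ['Write', 'Purge_Buffer', 'Write']
```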
3.2.1 Infrastructure
Within this object, you configure the infrastructure objects required for use with
logical archives.
Buffers
Documents are collected in disk buffers before they are finally written to the
storage medium. To create disk buffers, see “Configuring Buffers” on page 47.
To get more information about buffer types, see “Disk Buffers” on page 31.
Caches
Caches are used to accelerate the read access to documents. To create caches, see
“Configuring Caches” on page 53.
Devices
Storage devices are used for long-term archiving. To configure storage devices,
see “Installing and Configuring Storage Devices” on page 56.
Disk Volumes
Disk volumes are used for buffers and pools. To configure disk volumes, see
“Configuring Disk Volumes” on page 45.
3.2.2 Archives
Within this object, you create logical archives and pools, define replicated
archives for remote standby scenarios, and view external archives of known
servers.
Original Archives
Logical archives of the selected server. To create and modify archives, see
“Configuring Archives and Pools” on page 65.
Replicated Archives
Shows replicated archives; see “Logical Archives” on page 65.
External Archives
Shows external archives of known servers; see “Logical Archives” on page 65.
3.2.3 Environment
Within this object, you configure the environment of an Archive Server. For
example, an Archive Cache Server must first be configured in the environment
before it can be assigned to a logical archive.
Cache Servers
Cache servers can be used to accelerate content retrieval in a slow WAN. See
“Configuring Archive Cache Server” on page 193.
Known Servers
Known servers are used for replicating archives in remote standby scenarios. See
“Adding and Modifying Known Servers” on page 177.
SAP Servers
The configuration of SAP gateways and systems to connect SAP servers to
Archive Server. See “Connecting to SAP Servers” on page 163.
Scan Stations
The configuration of scan stations and archive modes to connect scan stations to
Archive Server. See “Configuring Scan Stations” on page 169.
3.2.4 System
Within this object, you configure global settings for the Archive Server. You also
find all jobs and a collection of useful utilities.
Alerts
Displays alerts of the “Admin Client Alert” type. See “Checking Alerts” on
page 301. To receive alerts in the Administration Client, configure the events and
notifications appropriately. See “Monitoring with Notifications” on page 293.
Events and Notifications
Events and notifications can be configured to get information on predefined
server events. See “Monitoring with Notifications” on page 293.
Jobs
Jobs are recurrent tasks which are automatically started according to a time
schedule or when certain conditions are met, e.g. to write content from the buffer
to the storage platform. A protocol allows the administrator to watch the
successful execution of jobs. See “Configuring Jobs and Checking Job Protocol”
on page 95.
Key Store
The certificate store is used to administer encryption certificates, security keys
and timestamps. See “Configuring a Certificate for Document Encryption” on
page 125.
Policies
Policies are a combination of rights which can be assigned to user groups. See
“Checking, Creating and Modifying Policies” on page 156.
Reports
Reports contains the tabs “Reports” and “Scenarios”, which display the
generated reports and the available scenarios, respectively. See “Generating Scenario Reports”
on page 209.
Storage Tiers
Storage tiers designate different types of storage. See “Creating and Modifying
Storage Tiers” on page 91.
Users and Groups
Administration of users and groups. See “Checking, Creating and Modifying
Users” on page 158 and “Checking, Creating and Modifying User Groups” on
page 159.
Utilities
Utilities are tools which are started interactively by the administrator; see
“Utilities” on page 251.
3.2.5 Configuration
Within this object, you can set the configuration variables for:
Archive Server
Shows configuration variables related to the Archive Server. This includes
Administration Server, database server, Document Service logging, Notification
Server, Archive Timestamp Server.
Monitor Server
Shows configuration variables related to the Archive Monitoring Server and Web
Client.
Document Pipeline
Shows configuration variables related to the document server.
For a description of how to set, modify, delete and search configuration variables,
see “Setting Configuration Variables” on page 211.
The Archive Spawner service must be able to access the path. You might
have to run the service under a dedicated user to achieve this. If you use a
drive letter, make sure that the drive is mapped at boot time before the
Spawner service is started and does not disconnect after being idle for a
while. For this reason, OpenText recommends using UNC paths rather than
mapped network drives with drive letters.
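Whether a Windows path is a UNC path or a drive-letter path can be checked mechanically, as in this hypothetical helper (the server name filer01 is invented for illustration):

```python
def is_unc_path(path: str) -> bool:
    r"""True for UNC paths like \\server\share\dir; drive-letter paths may unmap when idle."""
    return path.startswith("\\\\")  # UNC paths begin with two backslashes

print(is_unc_path(r"\\filer01\archive\buffer1"))  # True  -> safe for the Spawner service
print(is_unc_path(r"X:\archive\buffer1"))         # False -> mapped drive, avoid
```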
Click Browse to open the directory browser. Select the designated directory
and click OK to confirm.
If you enter the directory path manually, ensure that a backslash is inserted
in front of the directory name if you are using volume letters (e.g., e:\vol2).
Volume class
Select the storage medium or storage system to ensure correct handling of
documents and their retention.
Hard Disk
Hard disk volume that provides WORM functionality or that can be used
as disk buffer. Documents are written from the buffer to the volume
without additional attributes. Use this volume class for buffers.
Hard Disk based read-only system
Local read-only hard-disk volume; documents are written from the buffer
to the volume, and the read-only attribute is set.
Further supported storage vendors
For details on the other supported storage systems, see the Storage
Platform Release Notes in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
6. Click Finish.
Create as many hard-disk volumes as you need.
Renaming disk volumes To rename a disk volume, select it in the result pane and click Rename
in the action pane.
Note: If you want to rename a disk volume, make sure that an existing
replicated disk volume is also renamed. Then start the Synchronize_Replicates
job on the remote server. This will update the volume names on both servers.
Further steps:
• “Creating and Modifying a Disk Buffer” on page 48
• “Creating and Modifying a HDSK (Write-Through) Pool” on page 85
• “Creating and Modifying Pools with a Buffer” on page 85
• “Write Incremental (IXW) Pool Settings” on page 88
7. Schedule the Purge_Buffer job. The command and the arguments are entered
automatically and can be modified later. See “Setting the Start Mode and
Scheduling of Jobs” on page 100.
Modifying a disk buffer To modify a disk buffer, select it and click Properties in the action
pane. Proceed in the same way as when creating a disk buffer. The name of the disk
buffer and the Purge_Buffer job cannot be changed.
Deleting a disk buffer To delete a disk buffer, select it and click Delete in the action pane.
A disk buffer can only be deleted if it is not assigned to a pool.
See also:
• “Creating and Modifying Disk Volumes” on page 46
• “Creating and Modifying a Disk Buffer” on page 48
See also:
• “Creating and Modifying Jobs” on page 99.
• “Setting the Start Mode and Scheduling of Jobs” on page 100
7. Click OK.
To synchronize servers:
1. Select Buffers in the Infrastructure object or select Archives in the console
tree.
2. Click Synchronize Servers in the action pane.
3. Click OK to confirm. The synchronization is started.
Global cache
If no cache path is configured and assigned to a logical archive, the global cache is
used. The global cache is usually created during installation, but no volume is
assigned to it. To use the global cache, a volume must be assigned. See “Adding
Hard-Disk Volumes to Caches” on page 54.
Depending on the time when you want to cache documents, select the appropriate
configuration setting:
Enable caching for the logical archive
The Caching option in the archive configuration; see “Configuring the
Archive Settings” on page 80.
Caching when the document is written
If the Write job is performed, documents are also written to the cache.
Caching when the buffer is purged
The Cache documents before purging option in the disk buffer properties.
See “Creating and Modifying a Disk Buffer” on page 48.
See also:
• “Adding Hard-Disk Volumes to Caches” on page 54
• “Creating and Deleting Caches” on page 54
• “Defining Priorities of Cache Volumes” on page 56
To create a cache:
1. Create the volumes for the caches on the operating system level.
2. Start the Administration Client.
3. Select Caches in the Infrastructure object in the console tree.
4. Click New Cache in the action pane.
5. Enter the Cache name and click Next.
6. Enter the Location of the hard-disk volume.
7. Click Finish.
Note: If you want to change the priority of assigned hard-disk volumes, see
“Defining Priorities of Cache Volumes” on page 56.
Deleting a cache To delete a cache, select it and click Delete in the action pane. It is not possible to
delete a cache which is assigned to a logical archive. The global cache cannot be
deleted either.
See also:
• “Adding Hard-Disk Volumes to Caches” on page 54
• “Defining Priorities of Cache Volumes” on page 56
Caution
Be aware that your cache content becomes invalid if you change the volume
priority.
Note: If you want to change the priority of hard-disk volumes, see “Defining
Priorities of Cache Volumes” on page 56.
See also:
• “Configuring Caches” on page 53
• “Defining Priorities of Cache Volumes” on page 56
To delete a HD volume:
1. Select Caches in the Infrastructure object in the console tree.
2. Select the designated cache in the top area of the result pane. In the bottom area
of the result pane, the assigned hard-disk volumes are listed.
3. Select the hard-disk volume you want to delete.
4. Click Delete in the action pane.
5. Click OK to confirm.
Note: If you want to change the priority of hard-disk volumes, see “Defining
Priorities of Cache Volumes” on page 56.
See also:
• “Configuring Caches” on page 53
Caution
Be aware that your cache content becomes invalid if you change the volume
priority.
Important
Although you can configure most storage systems for container file storage
as well as for single file storage, the configuration is completely different.
To create a volume:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click New Disk Volume in the action pane.
4. Enter settings:
Volume name
Unique name of the volume.
Base directory
Base directory, which was defined on the storage system with system-specific
tools during installation.
5. Click Finish to create the new volume.
To attach a device:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click Attach in the action pane.
To detach a device:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the designated device in the top area of the result pane.
3. Click Detach in the action pane.
This device can no longer be accessed and can be turned off. The status is set to
“Detached”.
Tip: Label blank media, if necessary, before inserting them in the jukebox;
label backup media as well.
To insert a volume:
1. Insert the medium into the jukebox.
2. Select Devices in the Infrastructure object in the console tree.
3. Select the jukebox where you inserted the medium in the top area of the result
pane.
4. Click Insert Volume in the action pane.
The new volume is listed in the bottom area of the result pane.
The status is -blank-.
To test slots:
1. Select Devices in the Infrastructure object in the console tree. All available
devices are listed in the top area of the result pane.
2. Select the designated jukebox. The attached volumes are listed in the bottom
area of the result pane.
3. Click Test Slots in the action pane.
4. Enter the numbers of the slots to be tested.
Use the following entry syntax:
7 Specifies slot 7.
3,6,40 Specifies slots 3, 6, and 40.
3-7 Specifies slots 3 to 7 inclusive.
2,20-45 Specifies slot 2 and slots 20 to 45 inclusive.
5. Click OK.
A protocol window shows the progress and the result of the slot test. To check
the protocol later on, see “Checking Utilities Protocols” on page 252.
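The slot entry syntax described in step 4 can be expanded programmatically. The helper below is a hypothetical sketch, not a tool shipped with the product:

```python
def parse_slots(entry: str) -> list:
    """Expand entries like '2,20-45' or '3,6,40' into a sorted list of slot numbers."""
    slots = set()
    for part in entry.replace(" ", "").split(","):
        if "-" in part:
            lo, hi = part.split("-")
            slots.update(range(int(lo), int(hi) + 1))  # '3-7' is inclusive on both ends
        else:
            slots.add(int(part))
    return sorted(slots)

print(parse_slots("7"))        # [7]
print(parse_slots("3,6,40"))   # [3, 6, 40]
print(parse_slots("3-7"))      # [3, 4, 5, 6, 7]
print(len(parse_slots("2,20-45")))  # 27 -> slot 2 plus the 26 slots 20..45
```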
Caution
Under Windows, do not write signatures to media with the Windows Disk
Manager. These signatures make the medium unreadable for the archive.
Details:
• “Write Incremental (IXW) Pool Settings” on page 88
• “Pools and Pool Types” on page 33
Note: WORM or UDO volumes, which are manually initialized, must be added
to the document service before they can be attached to a pool (see “Adding
Volumes to Document Service” on page 62).
• External Archives
Logical archives of known servers. These archives are located on known servers
and can be reached for retrieval (see “Adding and Modifying Known Servers”
on page 177).
For each original archive, you give a name and configure a number of settings:
• Encryption, compression, blobs and single instance affect the archiving of a
document.
• Caching and Archive Cache Servers affect the retrieval of documents (see
“Configuring Archive Access Via an Archive Cache Server” on page 204).
• Signatures, SSL and restrictions for document deletion define the conditions for
document access.
• Timestamps and certificates for authentication ensure the security of documents.
• Auditing mode, retention and deletion define the end of the document lifecycle.
Some of these settings are pure archive settings. Other settings depend on the
storage method, which is defined in the pool type. The most relevant decision
criterion for their definition is single file archiving or container archiving.
Note on IXW pools
Volumes of IXW pools are regarded as container files. Although the documents
are written as single files to the medium, they cannot be deleted individually,
neither from finalized volumes (which are ISO volumes) nor from non-finalized
volumes using the IXW file system information.
You can also use retention with container archiving. In this case, consider
the delete behavior that depends on the storage method and media (see “When the
Retention Period Has Expired” on page 217).
and the job is finished. The next time the Write job starts, the new data is
compressed and the amount of data is checked again.
HDSK pool When you create an HDSK pool, the Compress_<Archive name>_<Pool name> job is
created automatically for data compression. This job is activated by default.
Important
• OpenText strongly recommends not using single instance in combination
with retention periods for archives containing pools for single file
archiving (FS, VI, HDSK).
• If you want to use SIA together with retention periods, consider “Retention”
on page 69.
Excluding formats from SIA If necessary, you can exclude component types (formats) from
Single Instance Archiving. Microsoft Exchange and Lotus Notes emails are excluded by
default because their bodies are unique, although the attachments are archived with SIA.
SIA and ISO images Be careful when using Single Instance Archiving and ISO images:
Emails can consist of several components, e.g., logo, footer, attachment, which are
handled by Single Instance Archiving. Using ISO images, these components can be
distributed over several images. When reading an email, several ISO images must
be accessed to read all the components in order to recompose the original email.
Caching for frequently used components and proper parameter settings will
improve the read performance.
SIA for emails For emails, archiving in single instance mode decomposes emails, which means that
attachments are removed from the original email and are stored as separate
components on Archive Server. As soon as an email is retrieved from Content
Server, it is checked whether the email needs to be recomposed. If so, the
appropriate attachments are reinserted into the email and the complete email is
passed to Content Server.
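The decompose/recompose idea can be sketched as follows. This is a simplified illustration under the assumption that attachment components are deduplicated by content hash; it is not the actual Archive Server implementation, and all names are invented.

```python
import hashlib

component_store = {}  # content hash -> bytes; identical attachments are stored once

def decompose(email):
    """Store attachments as separate components; keep the unique body inline."""
    refs = []
    for name, data in email["attachments"]:
        digest = hashlib.sha256(data).hexdigest()
        component_store[digest] = data   # single instance: same content, one copy
        refs.append((name, digest))
    return {"body": email["body"], "attachment_refs": refs}

def recompose(stored):
    """Reinsert the attachments before handing the email back to the application."""
    attachments = [(name, component_store[d]) for name, d in stored["attachment_refs"]]
    return {"body": stored["body"], "attachments": attachments}

logo = b"<company logo>"
m1 = {"body": "Hi Bob", "attachments": [("logo.png", logo), ("report.pdf", b"Q1")]}
m2 = {"body": "Hi Eve", "attachments": [("logo.png", logo)]}
s1, s2 = decompose(m1), decompose(m2)
assert recompose(s1) == m1 and recompose(s2) == m2
print(len(component_store))  # 2 -> the shared logo is archived only once
```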
Important
If you use OpenText Email Archiving or Management, do not use the Email
Composer additionally.
(De-)Composing filters For both archiving and retrieval requests, a dedicated filter is used to
identify components to be decomposed or composed. The archiving filter applies to archives
that are enabled for SIA. The retrieval filter applies to all archives. If your system is
not configured for archiving emails, disable composing and decomposing as
described below.
Important
If your system is configured for archiving emails, do not modify these filters.
Configuring email (de-)composing Composing or decomposing emails can use a lot of memory,
which has an impact on performance. Therefore, you can configure how large emails
are handled, as described below.
5.1.3 Retention
Introduction This part explains the basic retention handling mechanism of Archive Server.
OpenText strongly recommends reading this part if you use retention periods for
documents. For administration, see “Configuring the Archive Retention Settings” on
page 81.
Retention period The retention period of a document defines a time frame, during which it is
impossible to delete or modify the document.
The retention period – more precisely the expiration date of the retention period – is
a property of a document and is stored in the database and additionally together
with the document on the storage medium, if possible.
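In other words, a delete request can only succeed once the stored expiration date has passed. A minimal sketch of this rule, assuming the expiration date is available as a date value (the function name is invented):

```python
from datetime import date
from typing import Optional

def may_delete(expiration: Optional[date], today: date) -> bool:
    """A document may be deleted only if it has no retention or the period expired."""
    return expiration is None or today >= expiration

print(may_delete(None, date(2011, 5, 16)))              # True  -> no retention applied
print(may_delete(date(2020, 1, 1), date(2011, 5, 16)))  # False -> still protected
print(may_delete(date(2010, 1, 1), date(2011, 5, 16)))  # True  -> retention expired
```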
Compliance Various regulations require storing documents for a defined retention period. To
facilitate compliance with regulations and meet the demand of companies, Archive
Server can handle retention of documents in cooperation with the leading
application and the storage subsystem. The leading application manages the
retention of documents, and Archive Server executes the requests or passes them to
the storage system.
To meet compliance, the content of documents needs to be physically protected or
protected by a system supporting a WORM capability or by optical media. This
means that it is not sufficient to store the components with a specified retention
period on a simple hard disk.
Retention behavior The following table lists settings and their impact on the retention
behavior (see “Configuring the Archive Retention Settings” on page 81):
Deferred archiving
Deferred archiving prevents Archive Server from writing the content from
the disk buffer to the storage system until another call removes the deferred
flag from the document. This can be useful in combination with EVENT
retention, if the retention cannot be set during the creation of the document.
Destroy
Destroy activates overwriting the document several times before purging.
Destroy is not available for all storage systems.
Terms used The terms storage system or storage platform are used for any long-term storage device
supported by Archive Server, such as optical media, Content-Addressed Storage
(CAS), Network-Attached Storage (NAS), Hierarchical Storage Management
Systems (HSM) and others. The term delete refers to the logical deletion of a
component and the term purge is used to describe the cleanup of content on the
storage system.
See also:
• “Configuring the Archive Retention Settings” on page 81
• “When the Retention Period Has Expired” on page 217
• If retention periods vary widely, delete requests for the documents will
spread over a long period. In this case, single document storage should be
preferred.
• If documents stored within the same archive have a similar retention period,
the retention will expire within a short time window for these documents. In
this case, ISO images can be used for storage.
Retention on storage systems The following table lists the storage systems and their
retention handling.
For the concrete retention support of the storage system, refer to the storage release
notes.
BLOB
Take care when using containers such as BLOBs. A BLOB has a retention period
which is the maximum retention of all documents within the BLOB.
Activating event-based retention for documents in a BLOB will lead to a retention
period of INFINITE for the whole BLOB on the storage system.
Single documents within a BLOB can neither be copied nor purged; BLOBs
can only be copied or purged as a whole.
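The BLOB retention rule described above can be sketched as a simple maximum over the member retentions, with an open-ended value standing in for event-based retention (an illustration, not product code):

```python
import math

INFINITE = math.inf  # stands in for event-based retention with no date set yet

def blob_retention(member_retentions):
    """Retention of a container is the maximum of its members' retention periods."""
    return max(member_retentions)

print(blob_retention([30, 365, 90]))       # 365 -> longest member wins
print(blob_retention([30, INFINITE, 90]))  # inf -> one event-based doc pins the BLOB
```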
Purge process A document or component can be deleted after the retention of the document
has expired or if no retention has been applied.
The leading application can delete a single component or delete the document.
Deleting a document implies that all components are deleted and then the document
itself. Due to the nature of storage, deletion cannot be handled within a transaction.
Purge process
ISO, BLOB, WORM
Delete requests cannot be propagated to the storage system.
The document is deleted in Archive Server. The content remains on the storage
system until all documents on the media or container have been deleted. The
DELETE_EMPTY_VOLUMES job purges the container files on the storage
system.
Single file pools
Delete requests for the components and documents initiate a synchronous purge
request on the storage system.
The following error situation can arise:
Storage system reports an error when the document or component is to be
deleted.
• For documents: The document information in Archive Server is deleted (as all
component information is already deleted).
• For components: The component information in Archive Server is deleted.
Note: This is new for versions from 10.0 on. In former versions, the
leading applications received an error message and the component
information was not deleted.
The leading application gets a success message. In addition, an
administrative notification is sent. A job will regularly retry to purge the
orphaned content on the storage system (version 9.7.0 or later).
If in doubt, contact OpenText Customer Support.
Purging content In single file archiving scenarios, the content on the storage system is purged during
the delete command. Content on ISO images or optical WORMs cannot be purged,
and an additional job is necessary to purge the content as soon as all content of the
partition is deleted from Archive Server.
The purging capabilities depend on storage system and pool type. The following
table lists the purge behavior depending on the pool type.
Note: If the document’s retention date has changed on the original server due
to a migrate call, the new values are only held by Archive Server and not
written to the ATTRIB.ATR file, which holds the technical metadata of the
document. The ATTRIB.ATR file will only be updated if the document is
updated, e.g., if a component is added on the original server or if the document
is copied to a different volume.
As soon as the updated ATTRIB.ATR has been replicated to the Remote Standby
Server, the new retention value will be known on the Remote Standby Server.
If there is a retention period in the source image available, the retention settings
of the device file are ignored.
• The retention of the source image has not yet expired: The target image will
inherit the retention of the remaining period.
• The retention has already expired or was set to NONE: No retention will be
applied to the target image.
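These inheritance rules can be summarized in a small function. This is a sketch of the rules as stated above, with invented names:

```python
from datetime import date
from typing import Optional

def target_retention(source_expiry: Optional[date], copy_date: date) -> Optional[date]:
    """Retention inherited by the target image when a source image is copied."""
    if source_expiry is not None and source_expiry > copy_date:
        return source_expiry  # not yet expired: target inherits the remaining period
    return None               # expired or NONE: no retention on the target image

print(target_retention(date(2030, 1, 1), date(2011, 5, 16)))  # 2030-01-01 inherited
print(target_retention(date(2005, 1, 1), date(2011, 5, 16)))  # None (already expired)
print(target_retention(None, date(2011, 5, 16)))              # None (was NONE)
```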
Note: After creating the logical archive, default configuration values are provided
for all settings. If you want to change these settings, open the Properties
window and modify the settings on the respective tab.
General information The description of the new archive can be viewed and modified (open
Properties in the action pane and select the General tab).
SSL
Specifies whether SSL is used in the selected archive for authorized,
encrypted HTTP communication between the Imaging Clients, Archive
Servers, Archive Cache Servers and OpenText Document Pipelines.
• Use: SSL must be used.
• Don't use: SSL is not used.
• May use: The use of SSL for the archive is allowed. The behavior
depends on the clients' configuration parameter HTTP UseSSL (see also
the Open Text Imaging Viewers and DesktopLink - Configuration Guide (CL-CGD) manual).
OpenText Imaging Java Viewer does not support SSL.
Document deletion
Here you decide whether deletion requests from the leading application are
performed for documents in the selected archive, and what information is
given. You can also prohibit deletion of documents for all archives of the
Archive Server. This central setting has priority over the archive setting.
See also: “Setting the Operation Mode of Archive Server” on page 332.
Deletion is allowed
Documents are deleted on request, if no maintenance mode is set and the
retention period has expired.
Deletion causes error
Documents are not deleted on request, even if the retention period has
expired. A message informs the administrator about deletion requests.
4. Click OK to resume.
Blobs
Activates the processing of blobs (binary large objects).
Very small documents are gathered in a meta document (the blob) in the disk
buffer and are written to the storage medium together. The method
improves performance. If a document is stored in a blob, it can be destroyed
only when all documents of this blob are deleted. Thus, blobs are not
supported in single-file storage scenarios and should not be used together
with retention periods.
Single instance
Enables single instance archiving.
See also: “Single Instance” on page 67.
Deferred archiving
Select this option, if the documents should remain in the disk buffer until the
leading application allows Archive Server to store them on final storage
media.
Example: The document arrives in the disk buffer without a retention period
and the leading application will provide the retention period shortly after.
The document must not be written to the storage media before it gets the
retention period. To ensure this processing, enable the Event based
retention option in the Edit Retention dialog box; see “Configuring the
Archive Retention Settings” on page 81.
Audit enabled
If auditing is enabled, all document-related actions are audited (see
“Configuring Auditing” on page 315).
Cache enabled
Activates the caching of documents to the DS cache at read access.
Cache
Pull-down menu to select the cache path. Before you can assign a cache path,
you must create it. (See “Creating and Deleting Caches” on page 54 and
“Configuring Caches” on page 53).
4. Click OK to resume.
No retention
Use this option if the leading application does not support retention, or if
retention is not relevant for documents in the selected archive. Documents
can be deleted at any time if no other settings prevent it.
No retention – read only
Like No retention, but documents cannot be changed.
Retention period of x days
Enter the retention period in days. The retention period of the document is
calculated by adding this number of days to the archiving date of the
document. It is stored with the document.
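The calculation can be illustrated with GNU date (a sketch; the archiving date and the 90-day period are example values, and Archive Server performs this calculation internally):

```shell
# Sketch: retention date = archiving date + retention period in days
# (example values only; not the Archive Server implementation)
date -d "2011-05-16 + 90 days" +%F   # -> 2011-08-14
```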
Event based retention
This method is used if a retention period is required but at the time of
archiving, it is unknown when the retention period will start. The leading
application must send the retention information after the archiving request.
When the retention information arrives, the retention period is calculated by
adding the given period to the event date. Until the document gets the
calculated retention period it is secured with maximum (infinite) retention.
You can use the option in two ways:
Together with the Deferred archiving option
The leading application sends the retention period separately from and
shortly after the archiving request (for example, in Extended ECM for
SAP Solutions). The documents should remain in the disk buffer until
they get their retention period. They are written to final storage media
together with the calculated retention period when the leading
application requests it. To ensure this scenario, enable the Deferred
archiving option in the Settings tab; see “Configuring the Archive
Settings” on page 80. Regarding storage media and deletion of
documents, the scenario does not differ from that with a given Retention
period of x days.
Without the Deferred archiving option
The retention period is set a longer time after the archiving request, and
the document should be stored on final storage media during this time.
For example, in Germany, personnel files of employees must be stored
for 5 years after the employee has left the company. The files are immediately
archived on storage media, and the retention period is set at the leaving
date. This scenario is only supported for archives with HDSK pool or
Single File (VI) pool (if supported by the storage system). In all other
pools, the documents would be archived with infinite retention, and the
retention period cannot be changed after archiving (only with migration).
For the same reason, do not use blobs in this scenario.
Infinite retention
Documents in the archive can never be deleted. Use this setting for
documents that must be stored for a very long time.
Destroy (unrecoverable)
This additional option is only relevant for archives with hard disk storage. If
enabled, the system at first overwrites the file content several times and then
deletes the file.
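The same overwrite-then-delete idea can be illustrated with the GNU shred utility (a sketch only; this is not the Archive Server implementation, and the file path is a placeholder):

```shell
# Overwrite a file three times, then remove it (illustrative only)
printf 'sensitive' > /tmp/demo_destroy
shred -u -n 3 /tmp/demo_destroy
# the file no longer exists afterwards
```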
4. Click OK to resume.
Important
Documents with an expired retention period are only deleted if:
• document deletion is allowed; see “Configuring the Archive Security
Settings” on page 79, and
• no maintenance mode is set; see “Setting the Operation Mode of Archive
Server” on page 332.
See also:
• “Retention” on page 69
• “When the Retention Period Has Expired” on page 217
ArchiSig
Enables ArchiSig timestamp usage, i.e., an ArchiSig timestamp is generated
for the archived documents.
For a description of ArchiSig, see “Timestamp Usage” on page 111.
4. In the Verification area, select one of the following options:
None
Timestamps are not verified. Each requested document is delivered.
Relaxed
Timestamps are verified. Each requested document is delivered. If the
timestamp cannot be verified, an auditing entry is written (if auditing is
enabled).
Strict
Timestamps are verified. Requested documents are delivered only if the
timestamp is verified.
In addition, an auditing entry is written (if auditing is enabled).
Note: Even if no timestamps are used, documents can have timestamps
assigned by clients. If not verified, these documents cannot be
delivered.
5. Click OK to resume.
Scheduling the compression job To schedule the associated compression job, select the pool and
click Edit Compress Job in the action pane. Configure the scheduling as described in
“Configuring Jobs and Checking Job Protocol” on page 95.
Modifying an HDSK pool To modify pool settings, select the pool and click Properties in the
action pane. Only the assignment of the storage tier can be changed.
To create a pool:
1. Select Original Archives in the Archives object in the console tree.
2. Select the designated archive in the console tree.
3. Click New Pool in the action pane. The window to create a new pool opens.
4. Enter a unique (per archive), descriptive Pool name. Consider the naming
conventions; see “Naming rule for archive components” on page 65.
5. Select the designated pool type and click Next.
6. Enter additional settings according to the pool type:
• “Write At-Once Pool (ISO) Settings” on page 86
• “Write Incremental (IXW) Pool Settings” on page 88
• “Single File (VI, FS) Pool Settings” on page 90
7. Click Finish to create the pool.
8. Select the pool in the top area of the result pane and click Attach Volume. A
window with all available hard-disk volumes opens (see “Creating and
Modifying Disk Volumes” on page 46).
9. Select the designated disk volume and click OK to attach it.
10. Schedule the Write job; see “Configuring Jobs and Checking Job Protocol” on
page 95.
Modifying a pool To modify pool settings, select the pool and click Properties in the action pane.
Depending on the pool type you can modify settings or assign another buffer.
Important
You can assign another buffer to the pool. If you do so, make sure that:
• all data from the old buffer is written to the storage media,
• the backups are completed,
• no new data can be written to the old buffer.
Data that remains in the buffer will be lost after the buffer change.
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and Modifying Storage Tiers” on
page 91).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 47).
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring Jobs and Checking Job Protocol” on page 95.
Original jukebox
Select the original jukebox.
Volume Name Pattern
Defines the pattern for creating volume names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in Configuration, search
for the Volume name prefix variable (internal name: ADMS_PART_PREFIX;
see “Searching Configuration Variables” on page 212). You can define any
pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed
text. The initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next volume based on this
pattern.
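The expansion of the default pattern can be sketched in the shell (the prefix, archive, pool, and sequence values below are hypothetical):

```shell
# Expansion of $(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) with example values
PREF=A1; ARCHIVE=INV; POOL=P1; SEQ=0003
echo "${PREF}_${ARCHIVE}_${POOL}_${SEQ}"   # -> A1_INV_P1_0003
```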
Allowed media type
Here you specify the permitted media type. ISO pools support:
DVD-R You find the supported DVD-R types in the Release Notes Storage Platforms;
see the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
WORM You find the supported WORM types in the Release Notes Storage Platforms;
see the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
HD-WO HD-WO is the media type supported with many storage systems. An HD-WO
medium combines the characteristics of a hard disk and WORM – fast access
to documents and secure document storage. Enter also the maximum size of
an ISO image in MB, separated by a colon (for example, HD-WO:650; the size
value is only illustrative).
For some storage systems, the maximum size is not required; see the
documentation of your storage system in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/Open/12331031).
Number of volumes
Number of ISO volumes to be written in the original jukebox. This number
consists of the original and the backup copies in the same jukebox. For virtual
jukeboxes (HD-WO media), the number of volumes must always be 1, as
backups must not be written to the same medium in the same storage system.
Minimum amount of data
Minimum amount of data to be written in MB. At least this amount must have
been accumulated in the disk buffer before any data is written to storage media.
The quantity of data that you select here depends on the media in use. For HD-WO
media type, the value must be less than the maximum size of the ISO image that
you entered in the Allowed media type field.
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally in a
second jukebox of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
See also:
• “Creating and Modifying Pools with a Buffer” on page 85
• “Pools and Pool Types” on page 33
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and Modifying Storage Tiers” on
page 91).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 47).
Initializing
Auto initialization
Select this option if you want to initialize the IXW media in this pool
automatically; see also “Initializing Storage Volumes” on page 60.
Original jukebox
Select the original jukebox.
Volume Name Pattern
Defines the pattern for creating volume names.
$(PREF)_$(ARCHIVE)_$(POOL)_$(SEQ) is set by default. $(ARCHIVE) is the
placeholder for the archive name, $(POOL) for the pool name and $(SEQ) for an
automatic serial number. The prefix $(PREF) is defined in Configuration, search
for the Volume name prefix variable (internal name: ADMS_PART_PREFIX;
see “Searching Configuration Variables” on page 212). You can define any
pattern, only the placeholder $(SEQ) is mandatory. You can also insert a fixed
text. The initialization of the medium is started by the Write job.
Click Test Pattern to view the name planned for the next volume based on this
pattern.
Allowed media type
The media type is always WORM, for both WORM and UDO media.
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring Jobs and Checking Job Protocol” on page 95.
Number of drives
Number of write drives that are available on the original jukebox.
Auto finalization
Select this option if you want to finalize the IXW media in this pool
automatically; see also “Finalizing Storage Volumes” on page 233.
Filling level of volume: ... %
Defines the filling level in percent at which the volume should be finalized. The
Storage Manager automatically calculates and reserves the storage space
required for the ISO file system. The filling level therefore refers to the space
remaining on the volume.
and last write process: ... days
Defines the number of days since the last write access.
Backup
Backup enabled
Enable this option if the volumes of a pool are to be backed up locally in a
second jukebox of this Archive Server. During the backup operation, the
Local_Backup job only considers the pools for which backup has been enabled.
Backup jukebox
Select the backup jukebox.
Number of backups
Number of backup media that is written in the backup jukebox.
Number of drives
Number of write drives that are available on the backup jukebox. The setting is
only relevant for physical jukeboxes.
See also:
• “Creating and Modifying Pools with a Buffer” on page 85
• “Pools and Pool Types” on page 33
Storage Selection
Storage tier
Select the designated storage tier (see “Creating and Modifying Storage Tiers” on
page 91).
Buffering
Used disk buffer
Select the designated buffer (see “Configuring Buffers” on page 47).
Writing
Write job
The name of the associated Write job is created automatically. The name can
only be changed during creation, but not modified later. To schedule the Write
job, see “Configuring Jobs and Checking Job Protocol” on page 95.
Documents written in parallel
Number of documents that can be written at once.
See also:
• “Creating and Modifying Pools with a Buffer” on page 85
• “Pools and Pool Types” on page 33
Modifying storage tiers To modify a storage tier, select it and click Properties in the action
pane. Proceed in the same way as when creating a storage tier.
See also:
• “Creating and Modifying Pools” on page 84
Important
In case you are using Archive Cache Server, consider that a re-initialization
in secure environments can only work if the current certificates are available
on the Archive Cache Server. To avoid problems, the Update documents
security setting must be deselected before certificates are enabled; see step 3.
To enable certificates:
1. Select the logical archive in the Original Archives or Replicated Archives object
of the console tree.
Tip: Alternatively, you can also navigate to System > Key Store >
Certificates.
2. Select the Certificates tab in the result pane.
For scenarios using an Archive Cache Server, go on with step 3.
Otherwise, go on with step 4.
3. If an Archive Cache Server is assigned to a logical archive, proceed as follows:
a. Select Original Archives in the Archives object of the console tree.
b. Select the logical archive in the console tree.
c. Click Properties in the action pane and select the Security tab.
d. Temporarily clear Update documents and click OK.
4. Select the respective certificate by its name (in the result pane).
5. Click Enable or Disable in the action pane.
The certificate is enabled or disabled, respectively.
Command Description
Write_CD Writes data from disk buffer to storage media as ISO images, belongs
to ISO pools.
Write_WORM Writes data incrementally from disk buffer to WORM and UDO, be-
longs to IXW pools.
Write_GS Writes single files from disk buffer to a storage system through the
interface of the storage system (vendor interface), belongs to Single
File (VI) pools.
Write_HDSK Writes single files from disk buffer to the file system of an external
storage system, belongs to Single File (FS) pools.
Purge_Buffer Deletes the contents of the disk buffer according to conditions; see
“Configuring Buffers” on page 47.
backup_pool Performs the backup of all volumes of a pool.
Compress_HDSK Compresses the data in an HDSK pool.
Copy_Back Transfers cached documents from the Archive Cache Server to the
Archive Server. The Copy_Back job is disabled by default and must
only be enabled for Archive Servers with “write back” mode enabled.
See “Configuring Archive Cache Server” on page 193. By
default, documents not older than three days are transferred. A
message appears if there are older documents remaining. The default
setting can be modified by changing the job settings.
Add the argument: -i <days> to set the interval.
To start and stop certain jobs, see “Starting and Stopping Jobs” on page 98.
To create a job:
1. Select Jobs in the System object in the console tree.
2. Select the Jobs tab in the top area of the result pane.
3. Click New Job in the action pane. The wizard to create a new job opens.
4. Enter a name for the new job. Select the command and enter the arguments
depending on the job.
Name
Unique name of the job that describes its function so that you can distinguish
between jobs having the same command. Do not use blanks and special
characters. You cannot modify the name later.
Command
Select the job command to be executed. See also “Important Jobs and
Commands” on page 95.
Argument
Entries can expand the selected command. The entries in the Arguments
field are limited to 250 characters. See also “Important Jobs and Commands”
on page 95.
5. Select the start mode of the job and click Next.
6. Depending on the start mode, define the scheduling settings or the previous job.
See also “Setting the Start Mode and Scheduling of Jobs” on page 100.
7. Click Finish to complete.
Modifying jobs To modify a job, select it and click Edit in the action pane. Proceed in the same way
as when creating a job.
• Monitor the job messages and check the time period the jobs take. Adapt the job
scheduling accordingly.
• Only one drive is used for Write jobs on WORM/UDO. Therefore, only one
WORM/UDO can be written at a time. That means, only one logical archive can
be served at a time.
• Backup jobs need two drives, one for the original, one for the backup media.
7.1 Overview
Introduction Archive Server provides several methods to increase security for data transmission
and data integrity:
• secKeys / signed URLs, for verification of URL requests (see “Authentication
Using Signed URLs” on page 104).
• Protection of files and documents (see “Encrypted Document Storage” on
page 106).
• Timestamps to ensure that documents were not modified unnoticed in the
archive (see “Timestamp Usage” on page 111 and “Configuring OpenText
Archive Timestamp Server” on page 129).
These methods make use of:
• Certificates, for authentication, encryption and timestamps (see “Certificates” on
page 117).
• Checksums to recognize and reveal unwanted modifications to the documents
on their way through the archive (see “Using Checksums” on page 126).
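The checksum principle can be sketched with a standard tool (sha256sum stands in here purely for illustration; Archive Server has its own checksum handling, and the file paths are placeholders):

```shell
# Store a digest of a document, then verify it later; any modification
# of the document would make the check fail (illustrative only)
printf 'document content' > /tmp/demo_doc
sha256sum /tmp/demo_doc > /tmp/demo_doc.sha
sha256sum -c /tmp/demo_doc.sha   # prints "/tmp/demo_doc: OK" if unmodified
```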
Configuration and administration The main GUI elements used for configuration and
administration of security settings include:
• The Archives node: each time a new archive is added or new pools are created,
security settings are to be configured (Security tab of the Properties dialog).
• The Key Store in the System object of the console tree: used for configuration of
certificates and system keys.
Structure of this topic This topic describes the main tasks for configuration and administration
of security settings. General procedures (e.g. enabling a certificate) are described once and
referred to thereafter.
For each main task, a list of procedures, named “How to ...” tells you what to do.
Further information You can find more information on security topics in the “Security” folder
in the Knowledge Center
(https://knowledge.opentext.com/knowledge/llisapi.dll/open/15491557).
Configuration settings concerning security topics are described in more detail in the
“Configuration Parameter Reference”; see the following:
• Section 35.2 "Archive Server" in OpenText Archive Server - Administration Help
(AR-H-ACN)
To activate secKeys:
1. Select the logical archive in the Original Archives object of the console tree.
2. Click Properties in the action pane. The property window of the archive opens.
3. Select the Security tab. Check the settings and modify them, if needed.
Authentication (SecKey) Required To
Set the archive-specific access permissions:
• Read documents
• Update documents
• Create documents
• Delete documents
4. Click OK to resume.
As a result, the <key>.pem file contains the private key and is used
to sign the URL. <cert>.pem contains the public key and the certificate that
Archive Server uses to verify the signatures.
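Such a key pair and certificate can be generated, for example, with OpenSSL (a sketch; the file names, subject, and validity period below are placeholder values and are not prescribed by Archive Server):

```shell
# Generate a 2048-bit RSA private key and a self-signed certificate
# (placeholder file names and subject; adjust to your environment)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout key.pem -out cert.pem \
  -days 365 -subj "/CN=LeadingApplication"
```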
2. Store the certificate and the private key on the server of your leading application
(see the corresponding Administration Guide for details). Correct the path, if
necessary, and add the file names.
By storing the certificates in the file system, they are recognized by Enterprise
Scan and the client programs.
Important
For security reasons, limit the read permission for these directories to
the system user (Windows) or the archive user (UNIX).
3. To provide the certificate to the Archive Server use one of the following options:
• Import the certificate, see “Importing an Authentication Certificate” on
page 123.
Or:
• Send the certificate with the putcert command (see Table 7-3 on page 121).
Repeat this step, if you want to use the certificate for several archives.
4. Enable the certificate (see “Enabling a Certificate” on page 119).
For document encryption, a symmetric key (system key) is used. The administrator
creates this system key and stores it in the Archive Server's keystore. The system key
itself is encrypted on the Archive Server with the Archive Server’s public key and
can then only be read with the help of the Archive Server's private key. RSA
(asymmetric encryption) is used to exchange the system key between the Archive
Server and the remote standby server.
Encryption of documents can be enabled per logical archive.
Exception HDSK pools (write through)
HDSK pools do not use a buffer. To encrypt documents use the designated
Compress_ job, see “Data Compression” on page 66.
Note: HDSK pools are not released for use in productive archive systems. Use
them only for test purposes.
How to ... set up document encryption:
• “Activating Encryption Usage for a Logical Archive” on page 107
• “Creating a System Key for Document Encryption” on page 107
• “Exporting and Importing System Keys” on page 108
• “Configuring a Certificate for Document Encryption” on page 125
System keys are encrypted using the encryption certificate (see “Configuring a
Certificate for Document Encryption” on page 125).
Caution
Be sure to store this key securely, so that you can re-import it if necessary.
If the key gets lost, the documents that were encrypted with it can no
longer be read!
Do not delete any key if you set a newer one as current. It is still used for
decryption.
Handling for replicated archives The Synchronize_Replicates job updates the system keys and
certificates between Archive Servers before it synchronizes the documents. The system keys are
transmitted encrypted.
If you do not want to transmit the system keys through the network, you can also
export them from the original server to an external data medium and re-import
them on the remote standby server (see “Exporting and Importing System Keys” on
page 108).
Important
In the case of system failure or restore scenarios it can be vital to have
backups of the system key (and the related certificates).
E
Exports the contents of the System key node. Use the export in particular to
store the system keys for document encryption.
The user must log on and specify a path for the export files. The option -t NN:MM
splits the contents of the key store into several different files (MM; maximum 8).
At least NN files must be reimported in order to restore the complete key store.
Example:
sunny:~> /usr/ixos-archive/bin/recIO E -t 3:5
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Please authenticate!
User :dsadmin
Password :
Writing keystore with 3 system-keys to 5 token-files (3 required to restore)
Token[1/5] (default = /floppy/key.pem )
File (CR to accept above) : p1.pem
Token[2/5] (default = /floppy/key.pem )
File (CR to accept above) : p2.pem
Token[3/5] (default = /floppy/key.pem )
File (CR to accept above) : p3.pem
Token[4/5] (default = /floppy/key.pem )
File (CR to accept above) : p4.pem
Token[5/5] (default = /floppy/key.pem )
File (CR to accept above) : p5.pem
V
Verifies the contents of the System key node against the exported files.
The user must log on and specify the path for the exported data. Then the
exported data is compared with the key store on the Archive Server.
Example:
sunny:~> /usr/ixos-archive/bin/recIO V
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
key 1 : 1EE312C064A27F73 : OK
key 2 : BEEB5213EF5FFABF : OK
key 3 : 10C8D409E585E43B : OK
D
Displays the information on the exported files. The information is shown in a
table.
Example:
sunny:~> /usr/ixos-archive/bin/recIO D
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
idx ID created origin
---------------------------------------------------
1 EA03BDAF9ABB85A1 2010/01/18 17:26:01 sunny
2 1EE312C064A27F73 2009/11/03 14:28:08 hausse
3 BEEB5213EF5FFABF 2009/11/08 09:26:36 emma
I
Imports the saved contents of the System key node.
The user must log on and specify the path for the exported data. The data in the
System key node is restored, encrypted with the Archive Server's public key and
sent to the administration server. The results are displayed. Keys already
contained in the Archive Server's store are not overwritten.
Example:
sunny:~> /usr/ixos-archive/bin/recIO I
IMPORTANT: -----------------------------------------------------
IMPORTANT: recIO (release) 10.0.0.724
IMPORTANT: -----------------------------------------------------
recIO 10.0.0.724 (C) 2001-2010 Open Text Corporation
This product includes software developed by the OpenSSL Project
for use in the OpenSSL Toolkit (http://www.openssl.org/)
Please authenticate!
User :dsadmin
Password :
Token[1/?] (default = /floppy/key.pem)
File (CR to accept above) : p1.pem
Token[2/3] (default = /floppy/key.pem)
File (CR to accept above) : p2.pem
Token[3/3] (default = /floppy/key.pem)
File (CR to accept above) : p3.pem
ID:BEEB5213EF5FFABF created:2009/11/08 09:26:36 origin:emma
Key already exists
ID:276CBED602BDFC25 created:2010/01/18 12:09:32 origin:arthomasa
Key successfully imported
A job builds the hash tree that consists of hash values of as many documents as
configured, and adds one single timestamp. Thus, you can collect, for example, all
documents of a day in one hash tree. Only one timestamp per hash tree is required.
The verification process needs only the document and the hash chain leading from
the document to the timestamp, but not the whole hash tree.
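The underlying idea can be sketched with shell tools (sha256sum stands in for the hashing that Archive Server performs; the document contents are illustrative):

```shell
# Combine per-document hashes pairwise; only the resulting root hash
# needs a timestamp to cover all documents (illustrative only)
h1=$(printf 'doc1' | sha256sum | cut -d' ' -f1)
h2=$(printf 'doc2' | sha256sum | cut -d' ' -f1)
root=$(printf '%s%s' "$h1" "$h2" | sha256sum | cut -d' ' -f1)
echo "$root"   # a single 64-character hex digest
```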
Document timestamps Each document component gets a timestamp when it arrives in the
archive – more precisely: when it arrives in the disk buffer and is known to the
Document Service.
This (old) method requires a huge number of timestamps, depending on the number
of documents. Thus, it is available only for archives that used timestamps in former
Archive Server versions. You can migrate these timestamps to ArchiSig timestamps;
see “Migrating Existing Document Timestamps” on page 116.
Configuration You can set up signing documents with timestamps and the verification of
timestamps including the response behavior for each archive (see “Configuring the
Archive Settings” on page 80). Consider the recommendations given above.
If you use both methods in parallel, the document timestamp secures the document
until the hash tree is built and signed. As this time period is short, a document
timestamp is sufficient for these documents, while the hash tree, in general, gets a
timestamp created with a certificate of an accredited provider. This trusted
certificate is used for verification.
ArchiSig timestamps have a better performance and can be easily renewed.
Note: Document timestamps are only shown to ensure compatibility. You
cannot use them for new archives.
Timestamps and hash trees may become invalid or unsafe. To prevent this, they can
be renewed, see “Renewing Timestamps of Hash Trees” on page 116 and
“Renewing Hash Trees” on page 115.
Remote Standby In a Remote Standby environment, the Synchronize_Replicates job replicates the
timestamp certificates. Only enabled certificates are copied. The certificate on the
Remote Standby Server is automatically enabled after synchronization.
• timeproof TSS80
• AuthentiDate
• Quovadis
• OpenText Archive Timestamp Server
Important
The name of the pool is determined by the Pool for timestamps
configuration variable (internal name: AS.DS.TS_POOL). Its default value
is ATS_POOL, which means that you must call the pool ATS_POOL.
If the name of the pool and the value of the variable do not fit, the job
building the hash tree will fail.
2. In Jobs in the System object of the console tree, create jobs to build the hash
trees. You need one job for each archive that uses timestamps.
See also: “Configuring Jobs and Checking Job Protocol” on page 95.
Command
hashtree
Arguments
Archive name
Scheduling
If you use ArchiSig timestamps, schedule a nightly job. If the hash trees are
written to a storage system, make sure that the job is finished before the
Write job starts.
To renew timestamps:
1. Configure a new certificate on your timestamp server, make sure that it is
available for the Archive Server, and enable it in the Timestamp Certificates tab
in the Certificates entry in Key Store in the System object of the console tree.
Details: “Timestamp Usage” on page 111.
2. In a command line, enter:
dsHashTree show names
3. In the resulting list, find the distinguished subject name(s) of your timestamp
service (subject of the service’s certificate).
4. In a command line, enter:
dsHashTree -a <ArchiveName> -s <DistinguishedNameOfOldCertificate>
The process finds all timestamps that were created with the certificate indicated in
the command. It calculates hash values for the timestamps and builds new hash
trees. Each hash tree is signed with a new timestamp.
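Conceptually, this renewal step can be pictured as follows. The data structures and the simple left-fold aggregation are invented for illustration and do not reflect the real dsHashTree internals:

```python
import hashlib

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Invented sample data: (timestamp token, signer distinguished name)
timestamps = [
    (b"token-1", "/C=DE/O=Old TSA/CN=TSS 2008"),
    (b"token-2", "/C=DE/O=Old TSA/CN=TSS 2008"),
    (b"token-3", "/C=CH/O=New TSA/CN=TSS 2011"),
]

def renew(tokens, old_dn):
    """Hash every token signed with the expiring certificate and
    fold the hashes into one value that receives the fresh timestamp."""
    hashes = [sha256(tok) for tok, dn in tokens if dn == old_dn]
    root = hashes[0]
    for h in hashes[1:]:
        root = sha256(root + h)  # simple left-fold instead of a full tree
    return root, len(hashes)

root, count = renew(timestamps, "/C=DE/O=Old TSA/CN=TSS 2008")
print(count, root.hex())
```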
Important
You can migrate document timestamps only once! Never disable ArchiSig
timestamps after starting migration.
3. Call the hash tree creation tool for each archive with migrated timestamps:
dsHashTree <archive name>
The tool calculates hash values from the existing timestamps, builds hash trees,
and gets a timestamp for each tree.
7.5 Certificates
Certificates A certificate is an electronic document that uses a digital signature to bind
a public key to information identifying the owner of this key
(information such as the name of a person or an organization, their address, and so
forth). The certificate can be used to verify that a public key belongs to an
individual; for example, an archive uses this information to verify requests based on signed
URLs from various clients.
Certificate use cases Archive Server uses certificates for various use cases:
To check a certificate:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click View
Certificate in the action pane.
4. Check the general information and the certification path.
General
This tab provides detailed information to identify the certificate
unambiguously: the certificate's issuer, the duration of validity, and the
fingerprint.
Certification Path
Here you can follow the certificate's path from the root to the current
certificate. A certificate can be created from another certificate. The path
shows the complete derivation chain. You can also view the parent certificate
information from here.
To enable a certificate:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective certificate by its name and click Enable in the action pane.
To delete a certificate:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates object and select the appropriate <certificate> tab in the
result pane.
All certificates of the selected certificate type are listed.
3. Select the respective tab and the designated certificate and click Delete
Certificate in the action pane.
4. Confirm the message with OK.
If you have to manage a large number of certificates, make sure that the AuthIDs
and the names of the certificates are unique.
Command: generate certificate The following table describes the command to be used to create self-signed certificates.
Command: request certificate The following table describes the command to be used to request a certificate from a trust center.
Send your <requestOutFile> to a trust center. The trust center will return a
certificate including the public key. The certificate from the trust center must be in
PEM format.
Command: send certificate (putCert) The following table describes the command to be used to send a certificate to
Archive Server. After using the Refresh action (System –> Key Store –>
Certificates), the certificates sent using putCert are displayed on Archive Server.
Note: putCert cannot be used with SSL. To transfer the certificate to the
server, switch the SSL settings for the logical archive to May use or Don’t use.
Alternatively, if provided, you can also use dsh to send the certificate to Archive
Server.
For the <archive> variable, enter the logical archive on the Archive Server for
which the certificate is relevant. Replace the <file> variable with the name of the
certificate, for example, cert.pem.
If you need the certificate for several archives, call the command again for each
archive.
4. Quit the program with exit.
Important
Any change to the settings affects all archives that use this certificate!
To grant privileges:
1. Select Key Store in the System object of the console tree.
2. Select the Certificates entry in the result pane and then the Global tab. All
imported certificates are listed.
3. Select the designated certificate and click Change Privileges in the action pane.
4. Select (set check box) the privileges you want to assign to the certificate. The
following privileges are available:
• Read documents
• Create documents
• Update documents
• Delete documents
• Pass by
This privilege is only evaluated in Enterprise Library scenarios. Pass by must
be set for the certificate of the Archive Storage Provider.
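The effect of the privileges assigned in the Change Privileges dialog can be modeled as a simple set lookup. The certificate names below are hypothetical; the actual enforcement happens inside Archive Server:

```python
# Hypothetical privilege sets per certificate (not Archive Server's API).
PRIVILEGES = {
    "scan-client-cert": {"Read documents", "Create documents"},
    "el-storage-provider-cert": {"Pass by"},
}

def is_allowed(cert_name: str, required: str) -> bool:
    """A signed request is served only if the signing certificate
    carries the required privilege."""
    return required in PRIVILEGES.get(cert_name, set())

print(is_allowed("scan-client-cert", "Create documents"))  # True
print(is_allowed("scan-client-cert", "Delete documents"))  # False
```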
Enterprise Scan Enterprise Scan generates checksums for all scanned documents and passes them on
to Document Service. Document Service verifies the checksums and reports errors
(see “Monitoring with Notifications” on page 293). On the way from Document
Service to STORM, the documents are provided with checksums as well, in order to
recognize errors when writing to the media.
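The checksum handshake can be sketched as follows. This is a simplified illustration that assumes a SHA-256 hex digest as the checksum format; the real components use their own internal representation:

```python
import hashlib

def checksum(data: bytes) -> str:
    # Assumption for this sketch: SHA-256 hex digest as checksum format.
    return hashlib.sha256(data).hexdigest()

def verify(data: bytes, expected: str) -> bool:
    """Receiving side: recompute the checksum and compare, so any
    corruption on the transport path is detected."""
    return checksum(data) == expected

scanned = b"%PDF-1.4 ... scanned page ..."
sent = checksum(scanned)                 # computed by the scan client
print(verify(scanned, sent))             # True: document intact
print(verify(scanned + b"\x00", sent))   # False: corruption detected
```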
Timestamp and checksum The leading application, or some client, can also send a timestamp (including
checksum) instead of the document checksum; see “Timestamp Usage” on page 111.
Verification can check timestamps as well as checksums.
The certificates for those timestamps must be known to the Archive Server and
enabled, before the timestamp checksums can be verified (see “Importing a
Certificate for Timestamp Verification” on page 126).
• You can retrieve the Subject from the certificate and use it as application ID
(name of the application); see the procedure below.
Configuration variables:
• Path to the certificate <n>
• Passphrase for the private key
• Path to the private key
A window opens in which you can check and modify the parameters that control
the behavior of Archive Timestamp Server and the environment for Archive
Timestamp Client. Changes made in this window take effect only after Archive
Timestamp Server is restarted.
Location
Supply your location in a suitable format like <city>, <country>. The
minimum length of this string is 3 characters.
Server
This is the hostname of the computer on which Archive Timestamp Server
runs.
Port
The communication interface of Archive Timestamp Server is a TCP port.
Timestamp requests sent to this address are processed if Archive
Timestamp Server is running and configured. Specify the port number:
the default value is 32001; any number between 1 and 32767 can be used
unless another process is already using that port. Ports up to 1024
can only be used if Archive Timestamp Server runs with root privileges.
When in doubt, contact your system administrator.
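To verify whether a candidate port is still free on the intended host, a quick probe can help. The sketch below simply tries to bind the port locally; it is an illustration, not part of the product:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the TCP port; success means no other process is
    currently listening on it at this address."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(32001))
```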
Warning
A notification will be sent a given number of hours before the timeout is
reached. The status of the Timestamp service icon in Archive Monitoring
Web Client will change to “warning”. A setting of 0 disables this feature. See
also “Creating and Modifying Notifications” on page 297.
Time display
The main dialog retrieves the time from Archive Timestamp Server and
displays it permanently. It can show the time as GMT (Greenwich Mean
Time), or as a local time representation, or both formats at the same time.
Signature Key File
For a full configuration, you can leave this entry empty for now. If you want
to do a quick start, select the file <OT config AS>/timestamp/stampkey.pem.
The passphrase for this key file is ixos.
Change Passphrase
You can change the passphrase, which protects the signature key. If you
change the passphrase, the key file will be re-written.
Note: Any older copy of that file will still be usable with the old
passphrase.
Timeout
Because the internal clock of a computer has limited precision, this setting
lets you set a timeout period in hours after which Archive
Timestamp Server refuses to timestamp incoming requests. The timeout
counter is reset every time you transmit the signing key as described in
“Starting Archive Timestamp Client” on page 131. A timeout setting of 0
disables this feature and leaves the server running indefinitely.
Administration
If Archive Timestamp Server is installed on a Windows platform, Archive
Timestamp Client can be installed on the same machine. Otherwise, it can be
installed on a remote computer to do the administration via remote access.
Configuration requests will only be accepted by Archive Timestamp Server
if the remote host is specified in this line. Multiple hostnames and IP
addresses must be separated by semicolons (;). If no host is supplied, only
local administration is possible.
Allow remote administration from any host
This is not recommended! Selecting this check box causes Archive
Timestamp Server to accept configuration requests from any host. Only use
this for debugging or experimental purposes!
Timestamp Policy
Timestamps in the PKIX format (RFC 3161) contain an object identifier
(OID), which defines a timestamp policy. Leave the default value
(1.3.6.1.5.7.7.2) unless you know exactly what you need.
Notification
Enter a number of days. Starting that many days before one of the certificates
in use expires, Archive Timestamp Server sends one notification per day to
warn the administrator about the upcoming certificate expiration.
Passphrase(!)
This entry is needed for auto-initialization. If you enter a passphrase here, it
will be stored in Archive Timestamp Server's configuration in an encrypted
format. At startup time, Archive Timestamp Server can read and decrypt this
passphrase and use it to decode the signature key and initialize itself.
Hash Algorithm
If a certain hash algorithm is specified here, Archive Timestamp Server uses
that algorithm to create the signatures. The default setting is same as in
TS request, which causes Archive Timestamp Server to use the same hash
algorithm for the signature as the one specified in the timestamp request it
receives from Archive Server.
Protocol file location
The path of the protocol file location.
Note: The path for the protocol file must exist, or no protocol file will be
written. When starting up, Archive Timestamp Server reads the last
serial number issued and continues timestamping with the next serial
number. If no logfile exists, Archive Timestamp Server begins with
serial number 1 after each startup.
Maximum size
A maximum file size in kilobytes can be specified here. The protocol file is
renamed to <filename>.old if its size exceeds the given value. A
previous old-file is overwritten. If a size of 0 is specified, the protocol
file grows without limit.
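The rotation rule can be sketched as follows (an illustration of the described behavior, not the server's actual code):

```python
import os

def rotate_if_needed(path: str, max_kb: int) -> bool:
    """Rename the protocol file to <path>.old once it exceeds max_kb
    kilobytes; a previous .old file is overwritten. 0 disables the
    limit. Returns True if a rotation happened."""
    if max_kb <= 0 or not os.path.exists(path):
        return False
    if os.path.getsize(path) <= max_kb * 1024:
        return False
    os.replace(path, path + ".old")  # silently overwrites an old file
    return True

with open("ts.log", "wb") as f:       # 2 KB sample protocol file
    f.write(b"x" * 2048)
print(rotate_if_needed("ts.log", 1))  # True: 2 KB exceeds the 1 KB limit
```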
2. Enter settings and click OK.
To restart Archive Timestamp Server, open a command line and enter
spawncmd restart timestamp
2. Click Generate keys. The Generate new key pair window opens.
3. Enter settings:
Passphrase
Enter the passphrase twice. This passphrase will be used to encrypt the key-
pair before storing it in a file.
Caution
The program can decrypt the key-pair only if you supply the
passphrase, so do not forget it. Archive Timestamp Server cannot
create timestamps without it. The usual good advice for password
selection and handling applies: use a difficult password, do not write
it down!
Key length
At least 1024 bits are recommended. Longer keys increase security and
validity time of the issued timestamps, but they also increase the time
needed to sign and verify those timestamps.
RSA/DSA
Selects the signature algorithm for which the key will be generated. RSA is
recommended since not all trust centers support DSA.
4. Click Start to generate the key.
After key generation, you will be asked where to store the key. You are free
to select the location; two locations are particularly useful:
• In the <OT config AS>/timestamp/ directory. Easy to find but also readable by
an attacker.
• On a memory stick. The memory stick can be removed and stored in a secure
place. However, it is needed every time the key pair is sent to Archive
Timestamp Server, that is, every time you start Archive Timestamp Server and
every time the timeout expires.
Auto-initialization If you are using auto-initialization, the key must be stored on the Archive
Timestamp Server machine; for further information, see “Using the Auto
Initialization Mode” on page 130.
3. Enter the settings. The fields Country, Organization and Common Name are
mandatory. Common Name should be the fully qualified hostname of Archive
Timestamp Server. Organizational Unit, State / Province, Location and Email
are optional.
4. Click Generate Request to start.
If you have not used your passphrase since you started Archive Timestamp
Client, you will be asked for the passphrase now. If you stored the key pair on a
memory stick, make sure that the memory stick is inserted. The program needs
the private key to sign the certificate request.
5. Enter a filename and save the file. The contents of the file should look
something like this:
-----BEGIN CERTIFICATE REQUEST-----
MIICaDCCAiQCAQEwYzELMAkGA1UEBhMCREUxGTAXBgNVBAoTEElYT1MgU09GVFdB
UkUgQUcxDjAMBgNVBAsTBVRTMDAxMQ8wDQYDVQQHEwZNdW5pY2gxGDAWBgNVBAMT
...
I/ofikRvFV+fnw/kkddqr7VdNMH2oOHlozmgADALBgcqhkjOOAQDBQADMQAwLgIV
AJPkQtYi7uSSA3II6xeG6ucxJNz0AhUAh3acSLKnILYwnqdR7Vz8/R0b53s=
-----END CERTIFICATE REQUEST-----
6. Use the request in the file to apply for a certificate in PEM format at a trust
center.
Note: If Archive Timestamp Server for some reason does not grant you access
for configuration requests, the server’s system time is displayed but the status
values for Signature key, Certificates, Location, and Time only show a
question mark.
If you are performing remote administration (i.e. with Archive Timestamp
Client on your local host and Archive Timestamp Server on another computer),
make sure that the correct hostname for the administration host is entered on
the computer that runs Archive Timestamp Server (see “Configuring Basic
Settings” on page 131).
The debug output should give you a hint why Archive Timestamp Server
refuses to start.
Checking the status via Web browser The general status of Archive Timestamp Server, together with some details
about its configuration, can also be retrieved and displayed with a standard Web
browser.
Use the following URL:
http://<servername>:<port>
Note: The status can only be retrieved on machines that are configured as
Administration hosts in Archive Timestamp Server setup. If Allow remote
administration from any host is selected, the Web status can be used on any
host, of course.
There is a link to Archive Timestamp Server's logfile. Following this link can take
some time if the logfile is large. Your browser may even hang or crash if the logfile
is too large. This is not a bug in the server software!
To transmit parameters:
1. Start Archive Timestamp Client and click Transmit Parameters.
2. Check whether the displayed time is correct. If not, cancel this
dialog and adjust the time for Archive Timestamp Server first (see “Checking
and Adjusting the Time” on page 141).
3. Enter the passphrase and click OK.
After the full timeout period has passed without any transmission of the signature
key, the status becomes invalid and Archive Timestamp Server refuses to
timestamp any incoming requests.
If Archive Timestamp Server detects a manipulation of the system time, it will
immediately stop issuing timestamps. The status check shows invalid within the
next minute (the status is requested and updated every 60 seconds).
Note: Time adjustment is not possible when Archive Timestamp Server runs in
auto-initialization mode and the configuration has been set up outside Archive
Timestamp Client. In this case, the system time must be maintained on the
server, and Archive Timestamp Server must be restarted if the system time has
been set back.
5. Click OK to send this new time and date to Archive Timestamp Server.
6. Click Transmit Parameters again and provide your passphrase when asked (see
“Transmitting Configuration Parameters” on page 140).
• Second, it is verified that every certificate is currently valid and has not
expired. Otherwise, the message A certificate has expired is displayed.
• Finally, all certificates are verified with the issuer's public keys (taken from
the issuer's certificates). If this fails, the error message Verification of
certification path failed is displayed.
5. If you receive errors, check whether the signature keys, the certificates and the
time settings are configured correctly (see “Configuring Certificates and
Signature Keys” on page 114, “Checking and Adjusting the Time” on page 141).
6. Click Transmit Parameters again and provide your passphrase when asked (see
“Transmitting Configuration Parameters” on page 140).
If no error occurs and you see the message Certification path verified
successfully, the configuration is correct and can be used to run Archive
Timestamp Server.
The name of the computer where the script tries to contact Archive
Timestamp Server. This can be a remote machine. If this item is not set,
localhost is used instead.
Log file configuration
These settings specify the level of detail written in the log files. They apply to
the components ixTkernel (Archive Timestamp Server), ixTstamp (Archive
Timestamp Client) and ixTwatch (the adapter for Archive Monitoring Web
Client).
8.3.1.3 Quovadis
Introduction Quovadis offers qualified timestamps over the Internet. This kind of service
provides the highest level of trustworthiness.
ArchiSig timestamps
Connection method (internal name: TS_CONNECTION)
Use TCP.
It is possible to use HTTP if your infrastructure requires it, but this is not
recommended because the HTTP header adds only overhead and slows down the
timestamping. The port number remains the same.
Timestamp server port (internal name: TS_PORT)
By default, Archive Timestamp Server uses port 32001. See configuration on
Timestamp Server side.
Hostname of the timestamp server (internal name: TS_HOST)
This can be localhost if Archive Timestamp Server runs on the same host, or
the hostname or the IP address of the Timestamp Server.
Format of used timestamps (internal name: TS_FORMAT)
Use ietf (RFC 3161)
Timestamps (old)
Classic timestamps are neither supported nor recommended with a timestamp
service over the Internet.
AS.DS.COMPONENT.ARCHISIG.TS_PORT
By default, Archive Timestamp Server uses port 32001. See configuration on
Timestamp Server side.
Hostname of the timestamp server (internal name: TS_HOST)
This can be localhost if Archive Timestamp Server runs on the same host, or the
hostname or the IP address of the Archive Timestamp Server.
Multiple hostnames can be configured separated by a semicolon. Individual port
numbers can be supplied with multiple hosts if appended to the hostname with
a colon in between.
Example: tshost1:32001;tshost2:10318
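Parsing this host list syntax can be sketched as follows (illustrative only; Archive Server parses the variable internally):

```python
def parse_ts_hosts(value: str, default_port: int = 32001):
    """Split 'host[:port];host[:port]' into (host, port) pairs, using
    the default port where none is appended."""
    pairs = []
    for entry in value.split(";"):
        host, sep, port = entry.partition(":")
        pairs.append((host, int(port) if sep else default_port))
    return pairs

print(parse_ts_hosts("tshost1:32001;tshost2:10318"))
# [('tshost1', 32001), ('tshost2', 10318)]
print(parse_ts_hosts("localhost"))
# [('localhost', 32001)]
```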
AS.DS.COMPONENT.TIMESTAMPS.TIME_STAMP_MODE
IETF (RFC 3161 without HTTP header). SIGIA4 timestamps are strongly
discouraged!
AS.DS.COMPONENT.TIMESTAMPS.MAX_TSS_CONNECTIONS
Use 2. Archive Timestamp Server is usually fast enough that higher values do
not increase performance.
cert 2:
signer: /C=DE/O=IXOS Software AG/OU=Engineering SBL/CN=Root
Important
See “Password Security and Settings” below for additional information
on passwords.
Password settings You can specify a minimum length for passwords, whether a user is locked out
after several unsuccessful logons, and how long the lockout lasts.
Minimum length for passwords You can define a minimum character length for passwords. If you do not set this
property, the default value is eight.
Lock out after failed logons You can define that a user is locked out after a specified number of failed logon
attempts; the default is 0 (no lockout).
Note: The dsadmin user will never be locked out.
2. In the Properties window of the variable, change the Value as required (in
number of retries).
A value of 0 means that users will never be locked out.
3. Click OK and restart the Archive Spawner service.
4. Enter the following line (or modify it if present already):
=<number of failed attempts>
Unlock after failed logons You can define how long a user is locked out after a failed attempt; the default
is zero seconds.
Note: The dsadmin user will never be locked out.
2. In the Properties window of the variable, change the Value as required (in
seconds).
A value of 0 means that users will never be locked out.
3. Click OK and restart the Archive Spawner service.
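Taken together, the lockout and unlock settings behave as in this sketch (a model of the described policy, not Archive Server code; remember that the dsadmin user is never locked out):

```python
import time

class LockoutPolicy:
    """Lock after max_failures failed logons; unlock again after
    lock_seconds. A value of 0 disables the respective part."""
    def __init__(self, max_failures: int, lock_seconds: int):
        self.max_failures = max_failures
        self.lock_seconds = lock_seconds
        self.failures = 0
        self.locked_at = None

    def record_failure(self, now=None):
        self.failures += 1
        if self.max_failures and self.failures >= self.max_failures:
            self.locked_at = time.time() if now is None else now

    def is_locked(self, now=None) -> bool:
        if self.locked_at is None:
            return False
        now = time.time() if now is None else now
        if self.lock_seconds and now - self.locked_at >= self.lock_seconds:
            self.locked_at = None   # lockout period over: reset
            self.failures = 0
            return False
        return True

p = LockoutPolicy(max_failures=3, lock_seconds=60)
for _ in range(3):
    p.record_failure(now=1000.0)
print(p.is_locked(now=1010.0))  # True: still inside the 60 s window
print(p.is_locked(now=1100.0))  # False: unlocked after 100 s
```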
9.2 Concept
Modules To keep administrative effort as low as possible, the rights are combined in policies
and users are combined in user groups. The concept consists of three modules:
User groups
A user group is a set of users who have been granted the same rights. Users are
assigned to a user group as members. Policies are also assigned to a user group.
The rights defined in the policy apply to every member of the user group.
Users
A user is assigned to one or more user groups and is allowed to perform the
functions that are defined in the policies of these groups. It is not possible to
assign individual rights to individual users.
Policies
A policy is a set of rights, that is, actions that a user with this policy is allowed
to carry out. You can define your own policies in addition to using the
predefined, unmodifiable policies.
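The interplay of the three modules can be pictured as a union of rights. The group, policy, and right names below are invented sample data:

```python
# Invented example data: the real policies and rights come from Archive Server.
POLICIES = {
    "ALL_ADMS": {"create_archive", "delete_archive", "view_accounting"},
    "DPinfoDocToolAdministration": {"control_doctools"},
}
GROUP_POLICIES = {
    "aradmins": ["ALL_ADMS"],
    "dpusers": ["DPinfoDocToolAdministration"],
}
USER_GROUPS = {
    "dsadmin": ["aradmins"],
    "dpuser": ["dpusers"],
}

def effective_rights(user: str) -> set:
    """A user's rights are the union of all rights in all policies of
    all groups the user belongs to -- never assigned directly."""
    rights = set()
    for group in USER_GROUPS.get(user, []):
        for policy in GROUP_POLICIES.get(group, []):
            rights |= POLICIES.get(policy, set())
    return rights

print(sorted(effective_rights("dsadmin")))
```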
Standard users During the installation of Archive Server, some standard users, user groups and
policies are configured:
dsadmin in aradmins group
This is the administrator of the archive system. The group has the “ALL_ADMS”
policy and can perform all administration tasks, view accounting information,
and start/stop the Spawner. After installation, the password is empty; change it
as soon as possible; see “Creating and Modifying Users” on page 158.
Do not delete this user!
dpuser in dpusers group
This user controls the DocTools of the Document Pipelines. The group has the
“DPinfoDocToolAdministration” policy. The password is set by the dsadmin
user; see “Creating and Modifying Users” on page 158.
dpadmin in dpadmins group
This user controls the DocTools of the Document Pipelines and the documents in
the queues. The group has the “ALL_DPINFO” policy. The password is set by
the dsadmin user; see “Creating and Modifying Users” on page 158.
3. Create and configure the user group and add the users and the policies; see
“Checking, Creating and Modifying User Groups” on page 159.
Group Description
Archive Administration Summary of rights to control creation, configuration and deletion of logical archives.
Archive Users Summary of rights to control creation, configuration and deletion of users and groups and their associated policies.
Notifications Summary of rights to control creation, configuration and deletion of notifications and events.
Policies Summary of rights to control creation, configuration and deletion of policies.
Important
Rights from the following policy groups should no longer be used. These
rights are still available only to ensure compatibility with policies created for
former versions of Archive Server.
• Accounting
• Administration Server
• DPinfo
• Scanning Client
• Spawner
Modifying a policy To modify a self-defined policy, select the policy in the top area of the result pane
and click Edit Policy in the action pane. Proceed in the same way as when creating a
new policy. The name of the policy cannot be changed.
Deleting a policy To delete a self-defined policy, select the policy in the top area of the result pane and
click Delete in the action pane. The rights themselves are not lost; only the set of
rights that makes up the policy is deleted. Predefined policies cannot be deleted.
See also:
• “Checking, Creating and Modifying Users” on page 158
• “Checking, Creating and Modifying User Groups” on page 159
To create a user:
1. Select Users and Groups in the System object in the console tree.
2. Select the Users tab in the result pane. All available users are listed in the top
area of the result pane.
3. Click New User in the action pane. The window to create a new user opens.
4. Enter the user name and the password.
Username
Name of the user to administer the Archive Server. The name can be a
maximum of 14 characters in length. Spaces are not permitted. This name
cannot be changed subsequently.
Password
Password for the specified user.
Note: All printable ASCII characters are allowed within a password
except: “;”, “'” and “"”.
Confirm password
Enter exactly the same input as you have already entered under Password.
Click Next.
5. Select the groups the user should be assigned to. Click Finish.
Modifying user settings To modify a user's settings, select the user and click Properties in the action pane.
Proceed in the same way as when creating a new user. The name of the user cannot
be changed.
Deleting users To delete a user, select the user and click Delete in the action pane.
See also:
• “Creating and Modifying Policies” on page 157
• “Checking, Creating and Modifying User Groups” on page 159
• “Concept” on page 155
Name
A name that clearly identifies each user group. The name can be a maximum
of 14 characters in length. Spaces are not permitted.
Implicit
Implicit groups are used for the central administration of clients. If a group is
configured as implicit, all users are automatically members. If users who
have not been explicitly assigned to a user group log on to a client, they are
considered to be members of the implicit group and the client configuration
corresponding to the implicit group is used. If several implicit groups are
defined, the user at the client can select which profile is to be used.
5. Click Finish.
Modifying group settings To modify the settings of a group, select it and click Properties in the action pane.
Proceed in the same way as when creating a user group.
Deleting a user group To delete a user group, select it and click Delete in the action pane. Neither users
nor policies are lost; only the assignments are deleted.
See also:
• “Adding Users and Policies to a User Group” on page 160
• “Creating and Modifying Policies” on page 157
• “Checking, Creating and Modifying Users” on page 158
• “Concept” on page 155
Removing users and policies To remove a user or a policy, select it in the bottom area and click Remove in the
action pane.
Client
Three-digit number of the SAP client in which archiving occurs.
Feedback user
Feedback user in the SAP system. The cfbx process sends a notification
message back to this SAP user after a document has been archived using
asynchronous archiving. A separate feedback user (CPIC type) should be set
up in the SAP system for this purpose.
Password
Password for the SAP feedback user. This is entered, but not displayed,
when the SAP system is configured. The password for the feedback user
must be identical in the SAP system and in OpenText Administration Client.
Instance number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapdpxx service on the gateway server in order to
determine the number of the TCP/IP port (xx = instance number) being
used.
Codepage
Relevant only for languages which require a 16-bit character set for display
purposes or when different character set standards are employed in different
computer environments. A four-digit number specifies the type of character
set which is used by the RFCs. The default is 1100 for the 8-bit character set.
To determine the codepage of the SAP system, log into the SAPGUI and
select System > Status. If the SAP system uses another codepage, two
conversion files must be generated in SAP transaction sm59, one from the
SAP codepage to 1100 and the other in the opposite direction. Copy these
files to the Archive Server directory <OT config AS>/r3config and declare
the codepage number here in OpenText Administration Client. The cfbx
DocTool reads these files.
Language
Language of the SAP system; default is English. If the SAP system is
installed exclusively in another language, enter the SAP language code here.
Test Connection
Click this button to test the connection to the SAP system. A window opens
and shows the test result.
5. Click Finish.
Modifying SAP system connections To modify a SAP system connection, select it in the SAP System Connections tab
and click Properties in the action pane. Proceed in the same way as when creating a
SAP system connection.
Deleting SAP system connections To delete a SAP system connection, select it in the SAP System Connections tab
and click Delete in the action pane.
Testing a SAP connection To test a SAP connection, select it in the SAP System Connections tab and click
Test Connection in the action pane. A window opens and shows the test result.
Gateway number
Two-digit instance number for the SAP system. The value 00 is usually used
here. It is required for the sapgwxx service on the gateway server to
determine the number of the TCP/IP port (xx = instance number; e.g.,
instance number = 00, sapgw00, port 3300).
5. Click Finish.
Modifying SAP gateways To modify a SAP gateway, select it in the SAP Gateways tab and click Properties
in the action pane. Proceed in the same way as when creating a SAP gateway.
Deleting SAP gateways To delete a SAP gateway, select it in the SAP Gateways tab and click Delete in the
action pane.
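The instance-number-to-port mapping used by the sapdpxx and sapgwxx services follows the usual SAP convention (dispatcher ports 32xx, gateway ports 33xx, as in the sapgw00/3300 example above). The sketch below assumes that convention; verify it against your SAP installation:

```python
def sap_service_port(service: str, instance: str):
    """Map a two-digit instance number to the conventional SAP
    service name and TCP port: sapdpNN -> 3200+NN, sapgwNN -> 3300+NN."""
    prefix, base = {"dispatcher": ("sapdp", 3200),
                    "gateway": ("sapgw", 3300)}[service]
    return prefix + instance, base + int(instance)

print(sap_service_port("gateway", "00"))     # ('sapgw00', 3300)
print(sap_service_port("dispatcher", "02"))  # ('sapdp02', 3202)
```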
Requirements:
• The gateway to the SAP system is created and configured; see “Creating and
Modifying SAP Gateways” on page 165.
• The SAP system is created and configured; see “Creating and Modifying SAP
System Connections” on page 163.
Protocol
Communication protocol between the SAP application and Archive Server.
Fully configured protocols, which can be transported in the SAP system, are
supplied with the SAP products of OpenText.
Use as default SAP system connection
Selects the SAP system to which the return message with the barcode and
document ID is sent in the “Late Storing with Barcode” scenario. This setting
is only relevant if the archive is configured on multiple SAP applications, e.g.
on a test and a production system.
6. Click Finish.
Modifying archive assignments To modify an archive assignment, select it in the bottom area of the result pane and
click Properties in the action pane. Proceed in the same way as when assigning a
SAP system.
Removing archive assignments To delete an archive assignment, select it in the bottom area of the result pane and
click Remove Assignment in the action pane.
PS_ENCODING_BASE64_UTF8N 1
BIZ_APPLICATION<name>
User:
key = BIZ_DOC_RT_USER
value = <domain>\<name>
User group:
key = BIZ_DOC_RT_GROUP
value = <domain>\<name>
Late indexing to Process Inbox of TCP GUI
Archives the document to the Transactional Content Processing Servers and starts a process
with the document in the TCP GUI inbox. Documents are indexed in TCP.
DMS_Indexing n/a <processname> PS_MODE LEA_9_7_0
PS_ENCODING_BASE64_UTF8N 1
BIZ_REG_INDEXING
Leave the values empty
BIZ_APPLICATION<name>
BIZ_APPLICATION<name>
User:
key = BIZ_DOC_RT_USER
value = <domain>\<name>
User group:
key = BIZ_DOC_RT_GROUP
value = <domain>\<group>
Late indexing for plug-in event
Archives the document to the Transactional Content Processing Servers and calls a plug-in
event in the TCP Application Server. Documents are indexed in TCP.
DMS_Indexing PILE_INDEX n/a BIZ_ENCODING_BASE64_UTF8N
BIZ_APPLICATION<name>
BIZ_PLG_EVENT=<plugin>:<event>
Modifying an archive mode To modify the settings of an archive mode, select it in the Archive Modes tab in the
result pane and click Properties in the action pane. Proceed in the same way as
when adding an archive mode. For details, see “Archive Modes Properties” on
page 172.
Deleting an archive mode To delete an archive mode, select it in the Archive Modes tab in the result pane.
Click Delete in the action pane. If the archive mode is assigned to a scan host, it
must be removed first; see “Removing Assigned Archive Modes” on page 176.
See also:
• “Archive Modes Properties” on page 172
• “Scenarios and Archive Modes” on page 169
• “Adding a New Scan Host and Assigning Archive Modes” on page 174
Protocol
Protocol that is used for the communication with the pipeline host. For security
reasons, HTTPS is recommended.
Pipeline host
The computer where the Document Pipeline is installed.
Port
Port that is used for the communication with the pipeline host. Use 8080 for
HTTP or 8090 for HTTPS.
Advanced tab
Workflow
Name of the workflow that will be started in Enterprise Process Services when
the document is archived. For details concerning the creation of workflows, see
the Enterprise Process Services documentation.
Conditions
These archiving conditions are available:
R3EARLY
Early archiving with SAP.
BARCODE
If this option is activated, the document can only be archived if a barcode was
recognized. For Late Archiving, this is mandatory. For Early Archiving, the
behavior depends on your business process:
• If a barcode or index is required on every document, select the Barcode
condition. This makes sure that an index value is present before archiving.
The barcode is transferred to the leading application.
• If no barcode is needed, or it is not present on all documents, do not select
the Barcode condition. In this case, no barcode is transferred to the
leading application.
PILE_INDEX
Sorts the archived documents into piles for indexing according to certain
criteria. For example, the pile can be assigned to a document group, and the
access to a document pile in a leading application like Transactional Content
Processing can be restricted to a certain user group.
INDEXING
Indexing is done manually.
ENDORSER
Special setting for certain scanners. Only documents with a stamp are stored.
Extended Conditions
This table is used to hand over archiving conditions to the COMMANDS file, for
example, to provide the user name so that the information is sent to the correct
task inbox. The extended conditions are key-value pairs. Click Add to enter a
new condition. To modify an extended condition, select it and click Edit. Click
Remove to delete the selected condition.
See also:
• “Adding and Modifying Archive Modes” on page 171
• “Adding a New Scan Host and Assigning Archive Modes” on page 174
Deleting an archive mode: To delete an archive mode, select it in the Archive Mode tab in the result pane. Click Delete in the action pane. If the archive mode is assigned to a scan host, it must be removed first, see “Adding a New Scan Host and Assigning Archive Modes” on page 174.
See also:
• “Adding Additional Archive Modes” on page 175
• “Adding and Modifying Archive Modes” on page 171
• “Archive Modes Properties” on page 172
See also:
• “Adding and Modifying Archive Modes” on page 171
Example:
<host> = host03100
<port> = 8080
<secure port> = 8090
<context> = /archive
http://host03100:8080/archive?...
https://host03100:8090/archive?...
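The URL pattern in the example above can be sketched as a small helper. This is an illustration only, not part of the product; the function name is made up:

```python
def archive_url(host, port, context, secure=False):
    """Assemble the Archive Server access URL from host, port, and context
    (illustrative helper, not a product API)."""
    scheme = "https" if secure else "http"
    return f"{scheme}://{host}:{port}{context}"

# Matches the example values above:
print(archive_url("host03100", 8080, "/archive"))        # http://host03100:8080/archive
print(archive_url("host03100", 8090, "/archive", True))  # https://host03100:8090/archive
```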
Modifying known server settings: To modify the settings of a known server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when adding a known server.
In a remote standby scenario, all new and modified documents are asynchronously
transmitted from the original archive to the replicated archive of a known server.
This is done by the Synchronize_Replicates job on the Remote Standby Server.
The job physically copies the data on the storage media between these two servers.
Therefore, the Remote Standby Server provides more data security than the local
backup of media.
With a Remote Standby Server, not the entire server is replicated but just the logical
archives. Further, it is possible to use two servers crosswise, i.e. one Archive Server
is the Remote Standby Server of the other and vice versa.
The Remote Standby Server has the following advantages:
• The availability of the archive increases, since the Remote Standby Server is
accessed when the original server is not available.
• Backup media are located at a greater distance from the original Archive Server,
providing security in case of fire, earthquake, and other catastrophes.
Nevertheless, there are also disadvantages:
• Only read access to the documents is possible; modifications to and archiving of
documents are not possible directly.
• A document may have been stored or modified on the original server, but not
yet transmitted to the Remote Standby Server.
• No minimization of downtime with regard to archiving new documents, since
only read access to the Remote Standby Server is possible.
Note: The usage of a Remote Standby Server depends on your backup strategy.
Contact OpenText Global Services for the development of a backup strategy
that fits your needs.
Important
These volumes have to be named the same way as the original volume. The
replicate volumes need at least the same amount of disk space.
See also:
• “Configuring Disk Volumes” on page 45
• “Installing and Configuring Storage Devices” on page 56
Disk volumes
a. Select the first missing volume and click Attach or Create Missing
Volume in the action pane.
b. Enter Mount Path and Device Type and click OK. Repeat this for every
missing volume.
ISO volumes
ISO volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “ISO Volumes” on page 185).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the
action pane.
c. Enter settings (see “Write At-Once Pool (ISO) Settings” on page 86) for
Number of Backups to n (n>0, for volumes on HDWO: n=1) and select
the Backup Jukebox.
d. Configure the Synchronize_Replicates job according to your needs
(see “Setting the Start Mode and Scheduling of Jobs” on page 100).
IXW volumes
IXW volumes will be replicated by the asynchronously running
Synchronize_Replicates job (see also “IXW Volumes” on page 186).
a. Select Replicated Archives in the console tree and select the designated
archive.
b. Select a replicated pool in the console tree and click Properties in the
action pane.
c. Enter settings (see “Write Incremental (IXW) Pool Settings” on page 88)
for Number of Backups to n (n>0) and select the Backup Jukebox.
d. Configure the Synchronize_Replicates job according to your needs
(see “Setting the Start Mode and Scheduling of Jobs” on page 100).
4. Schedule the replication job Synchronize_Replicates (see “Setting the Start
Mode and Scheduling of Jobs” on page 100).
Note: On the original Archive Server, the backup jobs can be disabled if no
additional backups should be written.
3. Select the disk buffer which needs to be replicated and click Replicate in the
action pane.
4. Enter the name of the disk buffer and click Next.
A message is shown stating that the disk buffer is replicated and that a volume has
to be attached to this disk buffer.
5. Select Buffers in the Infrastructure object in the console tree.
6. Select the Replicated Disk Buffers tab in the result pane. The replicated buffers
are listed in the top area.
7. Select the replicated buffer in the top area. In the bottom area, the assigned
volumes are listed. Volumes which are not configured are labeled with the
missing type.
8. Select the first missing volume and click Attach or Create Missing Volume in
the action pane.
9. Enter Mount Path and click OK. Repeat this for every missing volume.
5. Remove the original volume and insert the replicate volume; see “To remove the
defective original volume and insert the replicate volume:” on page 188.
6. Update the new replicated volume; see “To update the new replicated volume:”
on page 189.
Note: For double-sided media, you have to execute the following steps for both
sides!
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the Execution of Jobs”
on page 101). If it was not possible to back up all data, stop here and contact
OpenText Customer Support.
4. Open a command line and determine the ID of the IXW (ISO) medium
(<WORM_ID>):
cdadm survey -v +sodi o=<ixwName>
Note: vid (option +i) is required later
5. Select the jukebox in Devices in the Infrastructure object in the console tree.
6. Select the designated volume and click Eject Volume in the action pane.
7. Remove the volume from the jukebox.
8. Export also the IXW (ISO) volume(s) from the STORM configuration.
a. In the command line, change to directory <OT install AS>\bin
b. Determine the ID of the IXW (ISO) medium:
cdadm survey -n +uoi
To remove the defective original volume and insert the replicate volume:
1. Log on to the original Archive Server.
2. Select the jukebox in Devices in the Infrastructure object in the console tree.
3. Select the defective volume in the bottom area of the result pane and click Eject
Volume in the action pane.
4. Remove the medium from the jukebox and label it as defective.
5. Insert the replicate IXW (ISO) medium and restore it as original:
a. Insert the replicate IXW (ISO) medium in the jukebox of the original Archive
Server.
b. Select the jukebox in Devices in the Infrastructure object in the console tree
and click Insert Volume in the action pane.
c. Select the medium (status bak) and select Restore in the action pane.
This makes the backup volume available as the original volume.
6. Select the designated archive in the console tree and the designated pool in the
result pane.
7. Select the backup volume in the bottom area of the result pane and select Clear
Backup Status in the action pane.
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the Execution of Jobs”
on page 101). If it was not possible to back up the data, stop here and
contact OpenText Customer Support.
2. Select the replicated archive in the console tree and the designated pool in the
result pane.
3. Determine the name of the volume (<ixwName>) to be removed in the bottom
area of the result pane.
4. Open a command line and determine the ID of the IXW (ISO) medium
(<WORM_ID>):
cdadm survey -v +sodi o=<ixwName>
Note: vid (option +i) is required later
5. Select the jukebox in Devices in the Infrastructure object in the console tree.
6. Select the designated volume and click Eject Volume in the action pane.
7. Remove the volume from the jukebox.
8. Export also the IXW (ISO) volume(s) from the STORM configuration.
a. In the command line, change to directory <OT install>\bin
b. Determine the ID of the IXW (ISO) medium:
cdadm survey -n +uoi
Important
If this job is executed during office hours, make sure that enough
bandwidth for the replicated data is available between the original server
and the Remote Standby Server.
4. Check whether the job ran successfully (see “Checking the Execution of Jobs”
on page 101). If it was not possible to back up the data, stop here and contact
OpenText Customer Support.
As the diagram hints, the Administration Server is central to the coordination of the
cache scenario at large. Administration Client is used to configure the settings of
each Archive Cache Server and the associated clients and archives.
Important
To ensure accurate retention handling, the clock of the Archive Cache Server
must be synchronized with the clock of the Archive Server.
Topic Description
Restrictions valid for “write back”
MTA documents: MTA documents can be stored, but the single documents in an
MTA document cannot be accessed until they are transferred
to the related Archive Server.
Attribute Search: Attribute Search in print lists is not available until the content
is transferred from an Archive Cache Server to the related
Archive Server.
VerifySig: The signature verification is processed for write-back items,
but the signer chain is not verified (no timestamp certificates
are available on the related Archive Server).
Deletion behavior: To avoid problems with deletion, do not use the following
archive settings:
• Original Archive > Properties > Security > Document
Deletion > Deletion is ignored (see also “Configuring the
Archive Security Settings” on page 79)
• Archive Server > Modify Operation Mode > Documents
cannot be deleted, no errors are returned (see also “Setting
the Operation Mode of Archive Server” on page 332)
Retention behavior: As long as write-back documents are only stored on the
Archive Cache Server, there is no protection based on the
document retention. After the documents are transferred to the
related Archive Server, the retention behavior takes effect. If
there is no client retention, the retention setting of the logical
archive is used.
Versioning of components: As long as components are only stored on the Archive
Cache Server, there is no version control! This means that after a
successful modification, the modified component is available, but
the version number is not incremented. A subsequent info call
will still return version “1” of the just modified component,
until the component has been transferred to the related
Archive Server.
Transfer and commit: Write-back documents are transferred to the related Archive
Server in a two-phase process:
Example:
<host> = csrv03100
<port> = 8080
<secure port> = 8090
<context> = /archive
http://csrv03100:8080/archive?...
https://csrv03100:8090/archive?...
4. Click Finish.
5. Configure the Copy_Back job. See also “Configuring Jobs and Checking Job
Protocol” on page 95 and Table 6-3 on page 97.
Note: Be aware that this job is disabled by default. If you intend to use the
"write back" mode, enable this job.
6. Click Finish. The new Archive Cache Server is added to the environment.
Next step:
• “Configuring Archive Access Via an Archive Cache Server” on page 204.
Caution
Do not modify the host name while writing back.
The following step ensures that pending write-back documents are
transferred to the related Archive Server. If this step fails, the Archive Cache
Server must not be deleted before the problem is solved.
Caution
This step ensures that pending write-back documents are transferred to
the related Archive Server. If this step fails, the Archive Cache Server
must not be deleted before the problem is solved.
To re-size volumes:
Caution
Danger of loss of data
Make sure not to accidentally remove the write-back volume or to change the
path of the write-back volume. In case of questions, contact OpenText
Customer Support.
1. In Runtime and Core Services > Configuration, select the Content Service
object.
For re-sizing, select one of the following variables:
• ACS size of write back volume in MB
or
• contentservice.SIZE<n>
Activating the modification: Modifications of the volume size or newly added volumes must be activated before they can be used. There are the following options for activation:
• Restarting the Archive Cache Server and checking the volume size using the
cscommand utility. The utility is provided in the <OT config>\Runtime and Core Services
10.2.1\Workspace\contentservice directory.
The user name and password of the respective Archive Server have to be supplied.
The result is a list of all volumes, split into data volume and volume reserved
for internal attributes, per volume.
Note: Re-sized volumes can be viewed only after restart of the server.
• Switching the maintenance mode on and off again.
See “Backup of Archive Cache Server Data” on page 248.
Note: The advantage of switching the maintenance mode on and off is that the
client does not receive errors, because incoming requests are
redirected.
Important
The subnet configuration will only be evaluated by clients using the
OpenText Archive Server API.
Note: Archive Cache Server keeps track of any relevant changes to the archive
settings and is synchronized automatically.
Cache server
The name of the Archive Cache Server assigned to this archive.
Caching enabled
If caching is enabled, one of the following modes can be set.
Write through
The Archive Cache Server will operate in “write through” mode for this
logical archive.
Write back
The Archive Cache Server will operate in “write back” mode for this
logical archive.
Note: If caching is disabled, the Archive Cache Server does not cache any
new documents for this logical archive. Instead, it acts as a proxy and
forwards all requests to Archive Server. Outstanding write-back
documents can still be retrieved.
5. Click Next and enter settings for subnet address and subnet mask/length.
The combination of subnet mask and subnet address specifies a subnet. Clients
residing in this subnet will use the selected Archive Cache Server. Typically, the
Archive Cache Server resides in the same subnet. It is possible to add more than
one subnet definition to an Archive Cache Server; see also “Subnet Assignment
of an Archive Cache Server” on page 203.
Several subnets
If a client belongs to more than one subnet, it will use the Archive Cache
Server that is assigned to the best matching subnet.
Subnet address
Specifies the address of the subnet in which an Archive Cache Server is
located. At least the first part of the address (e.g., NNN.0.0.0 in case of IPv4)
must be specified. A gateway must be established for each subnet.
IPv6
If you use IPv6, do not enclose the IPv6 address with square brackets.
Subnet mask / Length
Specifies the sections of the IP address that are evaluated. You can restrict
the evaluation to individual bits of the subnet address.
IPv4
Enter a subnet mask, for example 255.255.255.0.
IPv6
Enter the address length, i.e. the number of relevant bits, for example 64.
6. Click Finish to complete.
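The “best matching subnet” rule described in step 5 amounts to a longest-prefix match between the client address and the configured subnets. The sketch below illustrates that selection logic only; the server names and subnets are made up, and the real evaluation is performed by the client library:

```python
import ipaddress

def best_matching_cache_server(client_ip, assignments):
    """Return the cache server whose assigned subnet gives the longest
    prefix match for the client address (illustration of the rule only)."""
    ip = ipaddress.ip_address(client_ip)
    best_net, best_server = None, None
    for subnet, server in assignments:
        net = ipaddress.ip_network(subnet)
        # A more specific (longer-prefix) matching subnet wins.
        if ip in net and (best_net is None or net.prefixlen > best_net.prefixlen):
            best_net, best_server = net, server
    return best_server

assignments = [
    ("10.1.0.0/255.255.0.0", "cache-a"),  # IPv4 subnet given as address/mask
    ("10.1.2.0/24", "cache-b"),           # more specific subnet
]
print(best_matching_cache_server("10.1.2.17", assignments))  # cache-b
print(best_matching_cache_server("10.1.5.1", assignments))   # cache-a
```

A client in both subnets picks the more specific one; a client in neither falls back to direct Archive Server access (here: `None`).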
Modifying cache server settings: To modify the settings of an Archive Cache Server, select it in the top area of the result pane and click Properties in the action pane. Proceed in the same way as when configuring an Archive Cache Server.
2. Select the logical archive to which the Archive Cache Server is assigned.
3. Select the Cache Servers tab in the top area of the result pane and select the
Archive Cache Server. In the bottom area, the subnet definitions are listed.
4. Select the subnet definitions in the bottom area of the result pane and click
Properties.
Modify the settings for subnet mask and subnet address. See also “Configuring
Archive Access Via an Archive Cache Server” on page 204.
5. Click Finish.
1. In Runtime and Core Services > Configuration, select the Content Service
object.
2. Click New Property in the action pane.
3. Enter the property name: contentservice.DSHOST1
4. Select Global as Scope and String as Datatype.
5. Click Next.
6. Enter the value: <name of 2nd AS> and check Requires Restart?.
7. Click Next and then Finish to resume.
8. For each additional Archive Server, add another entry.
For example, for the next Archive Server, choose the following property name:
contentservice.DSHOST2
Note: The property names for Archive Server must be numbered in
ascending order.
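Ascending numbering matters because a consumer of these properties would typically read DSHOST1, DSHOST2, … and stop at the first missing index, so a gap hides all later entries. The reader below is a hypothetical illustration of that behavior, not the actual Runtime and Core Services code, and the host names are made up:

```python
def read_ds_hosts(props):
    """Collect contentservice.DSHOST<n> values for n = 1, 2, ...,
    stopping at the first missing index (hypothetical lookup logic)."""
    hosts, n = [], 1
    while f"contentservice.DSHOST{n}" in props:
        hosts.append(props[f"contentservice.DSHOST{n}"])
        n += 1
    return hosts

props = {"contentservice.DSHOST1": "as-two.example.com",
         "contentservice.DSHOST3": "as-four.example.com"}  # gap at DSHOST2
print(read_ds_hosts(props))  # ['as-two.example.com'] -- DSHOST3 is never reached
```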
To generate a report:
1. Select Reports in the System object in the console tree.
2. Select the Scenarios tab in the top area of the result pane.
3. Select the scenario for which you want to generate a report.
Currently only the reportArchive scenario is available.
4. Select the Run Scenario... action.
The resulting report is stored as an HTML file and can be displayed in a standard
browser; see the procedure “To display a report:” on page 210.
Information about a report: The following information is displayed per report in the result pane:
Name: Name of the report. The name is predefined; it is derived from the respective
scenario name, extended by a serial number.
Date: Date and time when the report was generated, in the format YYYY-MM-DD HH:MM:SS.
Deleting reports To delete a report, select it and click Delete in the action pane. Confirm the
displayed message with OK.
To display a report:
1. Select Reports in the System object in the console tree.
2. Select the Reports tab in the top area of the result pane.
3. Select the Refresh action.
4. Select a report in the Reports tab.
5. Select the Open Report... action.
The resulting HTML file can be displayed using your standard browser.
reportArchive: Generates a report comprising details for all archives (Original Archives,
Replicated Archives and External Archives) currently on the Archive
Server. These details include:
• Security
• Settings
• Retention
• Timestamps
• Pools, if defined
Resetting to default value: To reset a value to its default value, select it and click Reset to Default in the action
pane. This action is enabled only if the value currently differs from the default.
Confirm the confirmation dialog with OK.
Retrieving unspecified values: In the list of configuration variables, undefined values are marked with *** Value
not defined ***. In the properties window, undefined values are marked with an
icon:
Example:
If you enter port, the result, among others, can be the following:
• Port of the Archive Server – AS_HTTP__PORT
• Server Port for RPC requests – SERVER_PORT
Note: Click on the arrow icon to the right of the search icon (see figure
below) and select Search All Configuration Variables to display all
configuration variables.
1 Deletion of components works differently: if the storage system cannot delete a component physically, the component
remains and is not deleted logically either.
Important
To ensure correct deletion, you must synchronize the clocks of the Archive
Server and the storage subsystem, including the devices for replication.
Storage mode | Pool type | Delete from archive DB | Delete content physically | Destroy content
Single file storage | HDSK | x | x | x (destroy unrecoverable)
Single file storage | FS and VI | x | x | —
Container file storage | ISO, IXW on optical media | x | Delete volume when the last document is deleted: Delete_Empty_Volumes job | x (destroy media)
Container file storage | ISO on storage system | x | Delete volume when the last document is deleted: Delete_Empty_Volumes job | —
Notes:
• Not all storage systems release the space of the deleted volumes (see
documentation for your storage system).
• Blobs are handled like container file archiving.
The variables Delete volumes which have not been modified since days
(internal name: ADMS_DEL_VOL_NOT_MODIFIED_SINCE_DAYS) and
Delete volumes which are more than percent full
(internal name: ADMS_DEL_VOL_AT_LEAST_FULL)
prevent new, empty volumes from being deleted.
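The interplay of the two variables can be sketched as a guard predicate. The variable names are real, but the combination logic and the threshold values below are an illustration of the documented intent, not the job's actual implementation:

```python
def volume_may_be_deleted(days_since_modified, percent_full,
                          not_modified_since_days=30, at_least_full_percent=90):
    """Illustrative guard: a volume is a deletion candidate only if it has been
    unmodified long enough AND had reached the configured fill level.
    Threshold defaults are made-up example values."""
    return (days_since_modified >= not_modified_since_days
            and percent_full >= at_least_full_percent)

print(volume_may_be_deleted(0, 0))     # False: a new, empty volume is protected
print(volume_may_be_deleted(45, 95))   # True: old and once filled up
```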
Important
On double-sided media, check that both volumes are deleted.
b. Select the designated jukebox in the top area of the console tree. Check the
volume list in the bottom area of the result pane for volumes with the name
XXXX.
c. Select the XXXX volume and click Eject Volume in the action pane.
d. Destroy the medium physically.
For IXW media (WORM or UDO), consider the finalization status. When non-
finalized IXW volumes are exported, the document information is deleted from the
database but the file system information (inode and hash files) is not updated.
Therefore, we recommend finalizing IXW volumes before export.
Important
• Each side of a double-sided optical medium (WORM, UDO or DVD)
constitutes a volume. Export both volumes before you remove the
medium from the jukebox.
• Do not use the Export utility for volumes belonging to archives that are
configured for single instance archiving (SIA). A SIA reference to a
document may be created long after the document itself has been stored;
the reference is stored on a newer medium than the document. SIA
documents can be exported only when all references are outdated but the
Export utility does not analyze references to the documents.
• Volumes containing at least one document with non-expired retention
are not exported.
To export volumes:
1. If the optical medium is not in the jukebox, insert it.
2. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
3. Select the Export Volumes utility.
4. Click Run in the action pane.
5. Enter the export parameters.
Volume name(s)
Name of the volume(s) to be exported. You can use wildcards to export
multiple volumes at the same time.
Export from database
Enable this option when you export a defective volume. It causes the
database to be searched for entries for this volume, and the entries relating to
the contents of the volume are deleted. The volume itself is not accessed.
If this option is disabled, the command searches the volume directly and
deletes the associated entries from the database. Intact volumes that are no
longer needed are exported in this way. The volume must be in the jukebox.
6. Click Run. A protocol window shows the progress and the result of the export.
The export process can take some time.
7. If the medium is a double-sided optical one, export the second volume in the
same way.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
Arguments
Additional arguments. Not required for normal import, only for special tasks
like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
Base directory
Mount path of the volume.
Backup
The volume is imported as a backup volume and entered in the list of
volumes as a backup type.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional Arguments. Not required for normal import, only for special
tasks like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the FS or HDSK pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the pool.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
Base directory
Mount path of the volume.
Read-only
The volume is imported as a write-protected volume.
Arguments
Additional arguments. Not required for normal import, only for special tasks
like moving documents to another logical archive. Contact OpenText
Customer Support.
4. Click Run.
The import process can take some time. A message box shows the progress of
the import.
5. Select Original Archives in the Archives object in the console tree.
6. Select the designated archive and the VI pool.
7. Click Attach Volume in the action pane.
8. Select the volume and define the priority.
9. Click Finish to attach the imported volume to the VI pool.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
To check a document:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Document utility.
3. Click Run in the action pane.
4. Enter the document ID, the type and select whether the document should be
repaired.
DocID
Type the document ID according to the Type setting.
You can determine the string form of the document ID by searching for the
document in the application (e.g. on document type and object type) and
displaying the document information in Windows Viewer or in Java Viewer.
Type
Select the type of document ID. The ID can be entered in numerical (Number)
or string (String) form.
Repair document, if needed
Check this box if you want to repair defective documents. The utility
attempts to copy the document from another volume. If this option is
deactivated, the utility simply performs the test and displays the result.
Important
Use this repair option only if you are sure that you do not need the
missing documents any longer! You may lose references to
document components that are still stored somewhere in the archive.
If in doubt, contact OpenText Customer Support.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
To check a volume:
1. Select Utilities in the System object in the console tree. All available utilities are
listed in the top area of the result pane.
2. Select the Check Volume utility.
3. Click Run in the action pane.
4. Enter the name of the volume.
5. Click Run.
A protocol window shows the progress and the result of the check.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
backup must be written by one of the backup jobs. The pool configuration for the
backup jobs is:
Number of Partitions 1
Number of Backups 1
You can enable automatic finalization and set the conditions either when creating
the pool or at a later time.
See also:
• “Manually Finalizing IXW Volumes” on page 234
See also:
• “Checking Utilities Protocols” on page 252
• “Checking the Finalization Status” on page 235
• “Automatic Finalization of IXW Volumes” on page 233
• “Manually Finalizing IXW Pools” on page 234
See also:
• “Checking Utilities Protocols” on page 252
• “Checking the Finalization Status” on page 235
• “Manually Finalizing IXW Volumes” on page 234
• “Automatic Finalization of IXW Volumes” on page 233
See also:
• “Setting the Finalization Status Manually” on page 235
• “Manually Finalizing IXW Volumes” on page 234
• “Automatic Finalization of IXW Volumes” on page 233
“Checking the Finalization Status” on page 235). If finalization has failed several
times and you no longer want to repeat it, you can set the error status for that
volume to fin_err to indicate that the volume cannot be finalized. This error status
cannot be removed later.
Note: The failure of the finalization does not affect the security of the data on
the medium!
See also:
• “Checking Utilities Protocols” on page 252
• “Checking the Finalization Status” on page 235
• “Manually Finalizing IXW Volumes” on page 234
• “Automatic Finalization of IXW Volumes” on page 233
4. Select the new ISO volume and click Eject Volume in the action pane.
5. Label the ISO medium.
Do not use solvent-based pens or stickers. Never use a ballpoint pen or any
other sharp object to label your discs. The safest area for a label is within the
center stacking ring. If you use adhesive labels, make sure that they are attached
accurately and smoothly.
6. Remove and label all the new ISO media in this way.
7. Re-insert one of each set of identically named ISO media. To do this, select the
ISO jukebox in the top area of the result pane and click Insert Volume in the
action pane.
8. Remove all defective ISO media with the name --bad--. Label these as
defective. They must not be re-used.
9. Store the backup ISO media in a safe place.
Note: Perform these tasks also for the jukeboxes of the remote standby server.
To remove a volume:
1. Select Devices in the Infrastructure object in the console tree.
2. Select the jukebox from which you want to remove a volume in the top area of
the result pane.
3. Select the volume in the bottom area of the result pane and click Eject Volume
in the action pane.
4. Remove the backup volume in the same way.
Important
You can also use a Remote Standby Server for backing up data. For details
refer to “Configuring Remote Standby Scenarios” on page 181.
Notes:
• The Local_Backup job considers all pools for which the Backup option is
set. The backup_pool job considers only the pool for which it is created.
You can schedule additional backups of a pool by configuring both jobs, or
configure the pool backup separately.
• If problems occur, have a look in the protocol of the relevant job (see
“Checking the Execution of Jobs” on page 101).
3. Select the damaged volume in the bottom area of the result pane and click Eject
Volume in the action pane.
4. Insert the backup copy in the jukebox and click Insert Volume in the action
pane. It is now used as the original ISO volume without any further
configuration.
5. Select Original Archives in the Archives object in the console tree.
6. Select the original archive in which the volume is used.
7. Select the pool in the top area and the volume in the bottom area of the result
pane.
8. Click Backup Volume in the action pane.
9. Click OK to start the backup.
A protocol window shows the progress and the result of the backup. To check
the protocol later on, see “Checking Utilities Protocols” on page 252.
The volume list now contains a volume of the backup type with the same name
as the original volume.
10. Check the columns Unsaved (MB) and Last Backup/Replication:
The Unsaved (MB) column should now be blank, indicating that there is no
more data on the original volume that has not been backed up. The Last
Backup/Replication column shows the date and time of the last backup. The
Host column indicates the server where the backup resides.
Semi-automatic backup
With this method, you initialize the original and backup volumes manually in
the corresponding jukebox devices. The backup volume must have the same
name as the original one. To initialize the volume, proceed as described in
“Manual Initialization of Original Volumes” on page 61. The configuration
procedure is the same as for automatic backup, except that steps 5 and 6 differ:
there is no Auto Initialization, no Number of Backups and no Backup Jukebox
selection. The backup job finds the backup volumes by their names.
Manual backup of one volume
If the original or backup medium is damaged, it is necessary to create a new backup
medium manually. If the damaged medium is a double-sided one, initialize and
back up both sides of the medium.
13. For double-sided media, back up the second side of the medium in the same
way.
5. Click Restore Volume in the action pane. This makes the backup volume
available as original. If a volume has already been written to the second side of
the defective IXW medium, restore it in exactly the same way.
6. Create a new backup volume (see “Manual backup of one volume” on
page 241).
Note: If an IXW backup volume is damaged, remove the medium with Eject
and create a new backup volume (see “Manual backup of one volume” on
page 241).
There are several parts that have to be protected against data loss:
Volumes
All hard-disk volumes that can hold the only instance of a document must be
protected against data loss by RAID. You can find which volumes have to be
protected in the “Installation overview” chapter of the installation guides for
Archive Server.
OpenText Document Pipelines
The Document Pipeline of OpenText Imaging Enterprise Scan has to be protected
against data loss; for details, see section 18.2 "Backing up the Document Pipeline
directory" in Open Text Imaging Enterprise Scan - User and Administration Guide
(CLES-UGD).
Database
The database with the configuration for logical archives, pools, jobs and relations
to other Archive Servers and leading applications has to be protected against
data loss. The process depends on the type of database you are using (see
“Backup of the Database” on page 246).
Optical media
Optical storage media have to be protected against data loss. The process differs
if you use ISO or IXW media (see “Backup and Recovery of Optical Media” on
page 237).
Storage Manager configuration
The IXW file system information and the configuration of the Storage Manager
must be saved; see “Backing Up and Restoring of the Storage Manager
Configuration” on page 247.
Data in storage systems
Data that is archived on storage systems like HSM, NAS, or CAS also needs a
backup, either by means of the storage system or with Archive Server tools; see
“Backup for Storage Systems” on page 231.
Archive Cache Server
If “write back” mode is enabled, the Archive Cache Server stores newly created
documents locally without saving them immediately to the destination. It is
recommended to perform regular backups of the Archive Cache Server data; see
“Backup and Recovery of an Archive Cache Server” on page 248.
Important
During the configuration phase of installation, you can either select default
values for the database configuration or configure all relevant values. To
make sure that this guide remains easy to follow, the default values are used
below. If you configured the database with non-default values, replace these
defaults with your values.
Caution
If “write back” mode is enabled, the Archive Cache Server stores newly
created documents locally without saving them immediately to the
destination. This means that “highly critical” data is held on the local disk
of the related Archive Server. For security reasons, OpenText strongly
recommends storing data on a RAID system. To perform regular backups of
Archive Cache Server data, include the relevant items in your backup.
cscommand utility
With the Archive Cache Server installation comes a small utility (cscommand),
which allows you to activate or deactivate the maintenance mode. The commands
to activate and deactivate maintenance mode can be called from any script or batch
file. Usually the commands are added to the script that controls your backup. You
can find cscommand in the ProgramData\Runtime and Core Services
10.2.1\Workspace\contentservice folder (Windows) or the
/opentext/rcs/workspace/contentservice directory (Unix).
3. Start your backup. Be sure that all relevant directories are included.
4. Deactivate maintenance mode:
cscommand -c setOnline -u <username> -p <password>
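The procedure above is typically wrapped in a single script. A minimal sketch follows; cscommand is stubbed out so the flow is self-contained, and admin/secret stand in for real credentials:

```shell
#!/bin/sh
# Sketch of a backup wrapper around Archive Cache Server maintenance mode.
# cscommand is stubbed here for illustration only; in production, remove the
# stub and use the real utility from the contentservice folder named above.
cscommand() { echo "would run: cscommand $*"; }

# Activate maintenance mode first (the cscommand call from step 2 of the
# procedure, not repeated here).

# Step 3: back up all relevant directories, for example:
#   tar -czf /backup/cache-backup.tar.gz /opentext/rcs/workspace/contentservice

# Step 4: deactivate maintenance mode again.
cscommand -c setOnline -u admin -p secret
```

The deactivation call at the end ensures the cache server returns to normal operation as soon as the backup run finishes.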
Cache volumes
One or more cache volumes to be used for write-through caching. Not
highly critical, but useful for reducing the time to rebuild cached data.
Write-back volume
One single cache volume to be used for write-back caching. This
volume contains the following subdirectories:
dat
Components are stored here.
idx
Per document, additional information is stored, which contains all
necessary information to reconstruct the data in case of a crash.
log
Special protocol files (one per day) are stored here, containing
relevant information about when a document is transferred to and
committed by the Document Service.
Path to store database files
The absolute path to the volume where the Archive Cache Server
stores its metadata for the cached documents. Necessary for recovery.
2. If the write-back volume is still available, rename the root directory of the write-
back volume (see step 5, <location of write back data>).
3. Copy your backup of the data to the correct location to replace the corrupt one.
If you have also a partial loss of data volumes, copy the lost data from your
backup to the correct location.
4. Activate the consistency check:
cscommand -c checkVolume -u <username> -p <password>
Important
Each successfully recovered document is listed on the command line
and removed from <location of write back data>. This means that
the recovery operation can be performed only once.
6. If you do not get any error messages, the renamed directory (<location of
write back data>) can be deleted. Any data left in this subtree is no longer
needed for operation.
Important
If you get error messages, do not delete any data. If you cannot fix the
problem, contact OpenText Customer Support.
Utility Link
Check Database Against Volume “Checking Database Against Volume” on
page 227
Check Document “Checking a Document” on page 228
Check Volume “Checking a Volume” on page 230
Check Volume Against Database “Checking Volume Against Database” on
page 228
Compare Backup WORMs “Comparing Backup and Original IXW Volume”
on page 231
Count Documents/Components “Counting Documents and Components in a
Volume” on page 229
Export Volumes “Exporting Volumes” on page 220
Import GS Volume “Importing GS Volumes for Single File (VI) Pool”
on page 225
Import HD Volume “Importing Hard-Disk Volumes” on page 224
Import ISO Volume “Importing ISO Volumes” on page 222
Import IXW Or Finalized Volume “Importing Finalized and Non-Finalized IXW
Volumes” on page 223
Utility Link
View Installed Archive Server Patches “Viewing Installed Archive Server
Patches” on page 325
VolMig Cancel Migration Job “Canceling a Migration Job” on page 282
VolMig Continue Migration Job “Continuing a Migration Job” on page 281
VolMig Fast Migration of ISO Volume “Creating a Local Fast Migration Job
for ISO Volumes” on page 272
VolMig Fast Migration of remote ISO Volume “Creating a Remote Fast
Migration Job for ISO Volumes” on page 273
VolMig Migrate Components on Volume “Creating a Local Migration Job” on
page 267
VolMig Migrate Remote Volumes “Creating a Remote Migration Job” on page 270
VolMig Pause Migration Job “Pausing a Migration Job” on page 281
VolMig Renew Migration Job “Renewing a Migration Job” on page 282
VolMig Status “Monitoring the Migration Progress” on page 277
2. Select the Utilities tab in the top area of the result pane. All available utilities are
listed in the top area of the result pane.
3. Select the utility you want to check.
The latest message of the utility is listed in the bottom area of the result pane.
4. Select the Results tab in the bottom area of the result pane to check whether the
execution of the utility was successful, or select the Message tab in the bottom
area of the result pane to check the messages created during execution of the
utility.
To clear protocols:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Click Clear Protocol in the action pane.
All protocol entries are deleted.
Re-reading scripts
Utilities and jobs are read by Archive Server during the startup of the server. If
utilities or jobs are added or modified, they can be re-read. This avoids a restart of
Archive Server.
To re-read scripts:
1. Select Utilities in the System object in the console tree.
2. Select the Protocol tab in the top area of the result pane.
3. Click Reread Scripts in the action pane.
• Compression, encryption
Compression and/or encryption of documents before they are written to new
media.
• Retention
Setting of a retention period for documents during the migration process.
• Automatic Verification
Verification of all migrated documents. A verification strategy can be defined for
each volume, specifying the verification procedure. Timestamps or different
checksums can be selected, as well as a binary comparison.
21.2 Restrictions
The following restrictions are valid for the volume migration features:
• Remote single-file
Remote migration is only possible for volumes that are handled by STORM and
that can be mounted via NFS. Single-File volumes like HSM or HD volumes
cannot be migrated from a remote Archive Server.
• DBMS provider
Remote migration is only possible if the remote Archive Server uses the same
DBMS provider as the local Archive Server. For a cross-provider migration
setup, contact OpenText Services.
• Fast migration of ISO images
It is not possible to filter components. Everything is copied, regardless of
whether it is very new, very old, or has been logically deleted. No changes are
possible to the documents, i.e. documents cannot be compressed, decompressed
or encrypted. Also, retention periods cannot be applied. This holds for local and
remote Fast Migrations.
Caution
Consider that replication and backup settings are not transferred to the
target archive during migration. Therefore, the configuration for backup and
replicated archives must be performed for the migrated archive again. See
“Configuring Remote Standby Scenarios” on page 181 and “Creating and
Modifying Pools” on page 84.
Preconditions
• The hostname of the “old” server is supposed to be oldarchive. The volumes to
be migrated are located on oldarchive. The volumes of the oldarchive are
listed in Devices in the Infrastructure object of the console tree. This server is
also called “remote server”.
• The hostname of the new Archive Server (destination of migration) is supposed
to be newarchive. The target devices for remote migration are located on
newarchive. This server is also called “local server”.
3. For Oracle only: On the local server, extend the $TNS_ADMIN/tnsnames.ora file
to contain a section for the remote computer.
4. The actual read access of the media is done via NFSSERVERs. To add access to
oldarchive media, set the respective variable: in Configuration, search for the
NFS Server n variable (internal name: NFSSERVERN; see “Searching
Configuration Variables” on page 212; on the local server newarchive). Add an
entry for each NFSSERVER on the remote computer (at least for those that you
intend to read from). This will create access to the media on oldarchive.
6. For the newarchive, select Configuration > Archive Server in the Runtime and
Core Services object in the console tree.
7. Search for the variable in Configuration (see “Searching Configuration
Variables” on page 212). Add the List of mappings from remote NFSSERVER
names to local names (internal name: NFSMAP_LIST) variable/property. For
each remote NFSSERVER to read from, add an entry. The syntax is:
<remote server>:<remote NFSSERVER>:local:<local NFSSERVER alias>
The entry local is fixed syntax; it is not the name of the local server!
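For example, assuming the remote NFSSERVER on oldarchive is named NFSSERVER0 and should be known locally under the alias NFSSERVER2 (both NFSSERVER names are hypothetical), the entry would read:

```
oldarchive:NFSSERVER0:local:NFSSERVER2
```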
Character Description
* Wildcard: 0 to n arbitrary characters
e.g. vol5* matches all volumes whose names begin with vol5, e.g. vol5a,
vol5c78, vol52e4r
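The wildcard behaves like shell-style globbing; as a sketch of the matching rule only (not an Archive Server command):

```shell
# Demonstrate the * wildcard rule from the table above with a shell
# case pattern: vol5* matches 0 to n arbitrary trailing characters.
matches_vol5() {
  case "$1" in
    vol5*) echo "match" ;;
    *)     echo "no match" ;;
  esac
}

matches_vol5 vol5a     # match
matches_vol5 vol52e4r  # match
matches_vol5 vol6a     # no match
```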
Target archive
Enter the target archive name.
Target pool
Enter the target pool name.
Migrate only components that were archived: On date or after
You can restrict the migration operation to components that were archived after
or on a given date. Specify the date here. The specified day is included.
Migrate only components that were archived: Before date
You can restrict the migration operation to components that were archived
before a given date. Specify the date here. The specified day is excluded.
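The boundary behavior of the two date filters, inclusive “on date or after” and exclusive “before date”, can be sketched with plain string comparison of ISO dates (the dates below are examples):

```shell
#!/bin/bash
# Sketch of the date-filter boundaries: the "on date or after" day is
# included, the "before date" day is excluded. ISO dates (YYYY-MM-DD)
# compare correctly as strings.
from="2011-01-01"   # migrate only components archived on this date or after
to="2011-02-01"     # ... and before this date

in_range() {
  local d=$1
  if [[ ( "$d" > "$from" || "$d" == "$from" ) && "$d" < "$to" ]]; then
    echo "migrated"
  else
    echo "skipped"
  fi
}

in_range 2011-01-01   # migrated (the "on or after" day is included)
in_range 2011-02-01   # skipped  (the "before" day is excluded)
```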
Set retention in days
Enter the retention period in days. With this entry, you can change the retention
period that was set during archiving. The new retention period is added to the
archiving date of the document. The following settings are possible:
• >0 (days)
• 0 (none)
• -1 (infinite)
• -6 (archive default)
• -8 (keep old value)
• -9 (event)
Note: The retention date of migrated documents can only be kept or extended.
The following table provides allowed settings:
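For a positive value, the new expiry date is simply the archiving date plus the entered number of days; a sketch assuming GNU date, with example values:

```shell
# Sketch: a positive retention value is added to the archiving date
# of the document (GNU date assumed; the dates are examples only).
archive_date="2011-05-16"
retention_days=30

date -u -d "$archive_date + $retention_days days" +%F   # 2011-06-15
```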
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying Attributes of
a Migration Job” on page 285 (-v parameter).
Additional arguments
-e
Export source volumes after successful migration.
-k
Keep exported volume (export only the document entries, allow dsPurgeVol
to destroy this medium).
-i
Migrate only latest version, ignore older versions.
-A <archive>
Migrate components only from a certain archive.
Character Description
[] Specifies a set of volume names:
• “[ ]” can be used only once
• “,” can be used to separate numbers
• “-” can be used to specify a range
e.g. [001,005-099]
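The range part of the set notation can be illustrated by expanding it with seq; this is only a sketch of the naming pattern (the vol prefix is an example), not an Archive Server tool:

```shell
# Expand a zero-padded range such as [005-007] into candidate volume
# names; seq -w keeps the leading zeros (GNU seq assumed).
for n in $(seq -w 005 007); do
  echo "vol$n"
done
# prints vol005, vol006, vol007
```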
Verification mode
Select the verification mode that should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying Attributes of
a Migration Job” on page 285 (-v parameter).
Additional arguments
-i
Migrates only latest version, ignores older versions.
-A <archive>
Migrates components only from a certain archive.
4. Enter appropriate settings in all fields (see “Settings for remote fast migration”
on page 274). Click Run.
Verification mode
Select the verification mode which should be applied for volume migration. The
following settings are possible:
• None
• Timestamp
• Checksum
• Binary Compare
• Timestamp or Checksum
• Timestamp or Binary Compare
• Checksum or Binary Compare
• Timestamp or Checksum or Binary Compare
Notes:
• Many documents (including all BLOB documents) do not have a checksum
or a timestamp. When migrating a volume that contains such documents or
BLOBs, it is strongly recommended to select a mode that provides “binary
compare” as a last alternative.
• If a migration job cannot be finished because the source volume contains
documents that cannot be verified using the specified verification methods,
it is possible to change the verification mode. See “Modifying Attributes of
a Migration Job” on page 285 (-v parameter).
Additional arguments
-d (dumb mode)
Import of document/component entries into the local DB by dsTools instead of
reading directly from the remote DB. The dumb mode disables automatic
verification. Archive and retention settings cannot be changed.
-A <archive>
Migrates components only from a certain archive. Does not work with dumb
mode (-d).
5. Enter the ID of the migration job that you want to continue in the Migration Job
ID(s) field.
6. Click Run.
A protocol window shows the progress and the result of the migration. The
migration job is set back to the status it had before it was paused or the error
occurred.
6. Click Run.
A protocol window shows the progress and the result of the migration. The
migration job is set to the New status and is started from the beginning.
jobID
The ID of the migration job to be deleted.
jobID
The ID of the migration job to be finished.
jobID
The ID of the migration job to be modified.
attribute
The attributes which can be modified.
Note: Attributes with one hyphen (-) will be added/updated.
Attributes with two hyphens (--) will be removed.
-e (export)
Export source volumes after successful migration.
-k (keep)
Do not set the exported flag for the volume (so dsPurgeVol can destroy it).
-i (ignore old versions)
Migrate only the latest version of each component, ignore older versions.
-r <value> (retention)
Set a new value for the retention of the migrated documents.
Not supported in Fast Migration scenarios.
-v <value> (verification level)
Define how components should be verified by VolMig.
old poolname
Is constructed by concatenating the source archive name, an underscore
character and the source pool name, e.g. H4_worm.
new poolname
Is constructed by concatenating the target archive name, an underscore character
and the target pool name, e.g. H4_iso.
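Since the rule is plain concatenation with an underscore, it can be sketched directly (H4, worm and iso are the example names from the text):

```shell
# Construct old and new pool names per the rule above: <archive>_<pool>
src_archive="H4"; src_pool="worm"
tgt_archive="H4"; tgt_pool="iso"

echo "${src_archive}_${src_pool}"   # H4_worm
echo "${tgt_archive}_${tgt_pool}"   # H4_iso
```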
-d
Update pools in ds_job only.
-v
Update pools in both ds_job and vmig_jobs.
Note: This works only for local migration scenarios. Write jobs in a remote
migration environment remain on the remote server and cannot be moved to
the local machine.
jobID
The ID of the migration job whose components should be listed.
max results
How many components should be listed at most.
archive
The archive name.
pool 1
Name of the first pool.
pool 2
Name of the second pool.
archive
The archive name.
pool
The pool name.
sequence number
New number of the sequence.
sequence letter
New letter (for ISO pools only).
volume name
Name of the primary volume.
output file
File to write the output to instead of stdout.
Modifying event filters
To modify an event filter, select it in the top area of the result pane and click
Properties in the action pane. Proceed in the same way as when creating a new
event filter. The name of the event filter cannot be changed.
Deleting event filters
To delete an event filter, select it in the top area of the result pane and click Delete in
the action pane.
See also:
• “Conditions for Event Filters” on page 294
• “Available Event Filters” on page 296
• “Creating and Modifying Notifications” on page 297
• “Checking Alerts” on page 301
See also:
• “Creating and Modifying Event Filters” on page 293
• “Available Event Filters” on page 296
• “Creating and Modifying Notifications” on page 297
• “Checking Alerts” on page 301
Severity: Error
Message class: Server or <any>
Component: Monitor Server
Message code: -
See also:
• “Conditions for Event Filters” on page 294
• “Creating and Modifying Notifications” on page 297
• “Checking Alerts” on page 301
To create a notification:
1. Select Events and Notifications in the System object in the console tree.
2. Select the Notifications tab. All available notifications are listed in the top area
of the result pane.
3. Click New Notification in the action pane. The wizard to create a new
notification opens.
4. Enter the name and the type of the notification and click Next. Enter the
additional settings for the new notification event. See “Notification Settings” on
page 298.
5. Click OK. The new notification is created.
6. Select the new notification in the top area of the result pane.
7. Click Add Event Filter in the action pane. A window with available event filters
opens.
8. Select the event filters which should be assigned to the notification and click
OK.
• Select the new notification in the top area of the result pane and click Test in the
action pane.
• Click the Test button in the notification window while creating or modifying no-
tifications.
Modifying notification settings
To modify the notification settings, select the notification in the top area of the result
pane and click Edit in the action pane. Proceed in the same way as when creating a
new notification. The name of the notification cannot be changed.
Deleting notifications
To delete a notification, select the notification in the top area of the result pane and
click Delete in the action pane.
Adding event filters
To add event filters, select the notification in the top area of the result pane. Click
Add Event Filter in the action pane. Proceed in the same way as when creating a
new notification.
Removing an event filter
To remove an event filter, select it in the bottom area of the result pane and click
Remove in the action pane. The notification events are not lost; only the assignment
is deleted.
See also:
• “Notification Settings” on page 298
• “Using Variables in Notifications” on page 300
• “Checking Alerts” on page 301
Notification Type
Select the type of notification and enter the specific settings. The following
notification types and settings are possible:
Alert
Alerts are notifications, which can be checked by using Administration
Client. They are displayed in Alerts in the System object in the console tree
(see “Checking Alerts” on page 301).
Mail Message
Emails can be sent to respond immediately to an event or in standby time. If
you want to send it via SMS, consider that the length of the SMS text (including
Subject and Additional text) is limited by most providers. Enter the following
additional settings:
• Sender address: Email address of the sender. It appears in the from field
in the inbox of the recipient. The entry is mandatory.
• Mail host: Name of the target mail server. The mail server is connected
via SMTP. The entry is mandatory.
• Recipient address: Email address of the recipient. If you want to specify
more than one recipient, separate them by a semicolon. The entry is
mandatory.
• Subject of the mail, $ variables can be used (see “Using Variables in
Notifications” on page 300). If not specified, the subject is $SEVERITY
message from $HOSTNAME/$USERNAME($TIME).
Text
Free text field with the maximum length of 255 characters. $ variables can be
used (see “Using Variables in Notifications” on page 300).
Active Period
Weekdays and time of the day at which the notification is to be sent.
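How the default subject is assembled from the $ variables can be sketched with ordinary shell expansion; the values below are examples, not real notification data:

```shell
# Illustrate the default subject line
#   $SEVERITY message from $HOSTNAME/$USERNAME($TIME)
# with example values substituted for the notification variables.
SEVERITY="Error"
HOSTNAME="alpha.opentext.com"
USERNAME="dsadmin"
TIME="12:00"

echo "$SEVERITY message from $HOSTNAME/$USERNAME($TIME)"
# Error message from alpha.opentext.com/dsadmin(12:00)
```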
See also:
• “Creating and Modifying Notifications” on page 297
• “Using Variables in Notifications” on page 300
• “Checking Alerts” on page 301
See also:
• “Notification Settings” on page 298
To check alerts:
1. Select Alerts in the System object in the console tree. All notifications of the alert
type are listed in the top area of the result pane.
2. Select the alert to be checked in the top area of the result pane. Alert details are
displayed in the bottom area of the result pane. The yellow icon of the alert
entry turns to grey if read.
Marking messages as read
To mark all messages as read, click Mark All as Read in the action pane. The yellow
icons of the alert entries turn to grey.
<port>
Port at which Archive Monitoring Server receives requests. HTTP: 8080,
HTTPS: 8090
Example: http://alpha.opentext.com:8080/w3monc/index.html
Calling this URL opens the Server start page.
You can specify a number of parameters with the URL to customize Archive
Monitoring Web Client to meet your requirements (see “Customizing Archive
Monitoring Web Client” on page 307).
Title bar
The title bar contains the name of the monitored Archive Server and also
specifies the Web browser you are using.
Button bar
The button bar contains buttons to configure Archive Monitoring Web Client. All
these settings apply only to the current browser session. If you want to reuse
your settings, pass them as parameters when you start the program (see
“Customizing Archive Monitoring Web Client” on page 307).
Left column: monitored servers
Here you find a list of the monitored Archive Servers. Click a name. The current
status of this Archive Server is displayed in the other two columns. If you click
the name again, the status is checked at Archive Monitoring Server and the
display in Archive Monitoring Web Client is updated if needed.
Otherwise, the status of the components is updated after the specified refresh
interval (see “Setting the Refresh Interval” on page 306). If it is not possible to
establish a connection to a Web server, an icon indicating this is displayed in
front of the server name.
Tip: If you want to compare the status of different servers, open Archive
Monitoring Web Client for each of them and use the task bar to switch
between the different instances.
Middle column: components
In a hierarchical structure, you see the groups of components that run on the
interrogated host. Below each component group, you see the associated
components. Click a component to display its current status in the right column.
Click the icon to display the status of the component group on the right. For
information on the components and the possible messages, refer to “Component
Status Display” on page 308.
The icon in front of the component group name represents a summary of the
individual statuses of the components in the group. If you move the mouse
pointer to an icon in front of a component, abbreviated status information is
displayed in a tool tip even if the detailed information is not displayed in the
third column. In this way, you can compare the statuses of two components.
Right column: detailed information and status
This column contains detailed status information on the selected components or
component groups. If the right column is too narrow to display the information,
move the mouse pointer to the icon to display the status information in a tooltip.
Status line
Provides information on the status of the initiated processes.
Status icons The icons identify the system status at a glance. To configure the icons, see
“Configuring the Icon Type” on page 307. The possible statuses are:
• Available without restriction
• Warning, storage space problems are imminent. You can continue working for
the present, but the problem must be resolved soon
• Error, component not available
In the above figure, the Basic icon set was used as Monitor symbols.
The Error and Warning status is also displayed for the higher-level component
group and for the host, that is to say the problem is graphically escalated to a higher
level. In this way, you can identify problems even if the particular branch of the
hierarchy is closed.
Configuration file The configuration of Archive Monitoring Web Client is saved in the *.monitor files
that are located in the directory <OT install AS>\config\monitor.
Note: To refresh the display of the host status manually, click the name of the
host in the left column. In Internet Explorer, you can also refresh the display
with F5 or CTRL+R.
3. Click OK. The selected Archive Server is entered in the list of hosts.
To remove a host:
1. In the Archive Monitoring Web Client window, click Remove Hosts.
2. Select one or more Archive Servers that you no longer want to monitor.
3. Click OK. The selected Archive Server is removed from the host list.
Save this URL as a bookmark so that you can always start with your personal
configuration.
If you do not pass any parameters with the URL, Archive Monitoring Web Client
starts with the default settings: LEDs, refresh interval 120 seconds and no additional
hosts.
30.2.1 DP Space
Monitors the storage space for the Document Pipelines that are used for the
temporary storage of documents during the archiving process. A special directory
on the hard disk is reserved for the Document Pipelines. You can determine its
location in Configuration in Administration Client (see “Searching Configuration
Variables” on page 212).
During archiving, the documents are temporarily copied to this directory and are
then deleted once they have been successfully saved. The directory must be large
enough to accommodate the largest documents, e.g., print lists generated by SAP.
The status can be Ok, Warning and Error.
In Details you can see the free storage space in MB, the total storage space in MB
and the proportion of free storage space in percent. The values refer to the hard-disk
volume in which the DPDIR directory was installed. A warning or error message is
issued if insufficient free storage space is available. Possible causes are:
Error during the processing of documents in the Document Pipeline
Normally, the documents are processed rapidly and deleted immediately. If
problems occur, the documents may remain in the pipeline and storage space
may become scarce. Check the status of the DocTools (DP Tools group in the
Monitor) and the status of the Document Pipelines in Document Pipeline Info.
Document is larger than the available storage space
If no separate volume is reserved for the Document Pipeline, the storage space
may be occupied by other data and processes. In this case, the volume should be
cleaned up to create space for the pipeline. To avoid this problem, reconfigure
the Document Pipeline and locate it in a separate volume. The volume must be
larger than the largest document that is to be archived.
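To check the values described above from the command line, a small df-based script can report the filling level of the volume that holds the Document Pipeline directory. This is a sketch only; the DPDIR default and the 85/95 percent thresholds are assumptions, so substitute the directory configured for your Document Pipelines and your own limits:

```shell
# Sketch: report free space on the volume holding the Document Pipeline
# directory. DPDIR and the thresholds are assumed examples.
DPDIR="${DPDIR:-/}"

# POSIX df -P prints one data line:
# filesystem 1K-blocks used available capacity mount-point
set -- $(df -P "$DPDIR" | tail -1)
total_kb=$2
free_kb=$4
used_pct=${5%\%}

if [ "$used_pct" -ge 95 ]; then
  echo "Error: only ${free_kb} KB free for $DPDIR"
elif [ "$used_pct" -ge 85 ]; then
  echo "Warning: ${free_kb} KB free for $DPDIR"
else
  echo "Ok: ${free_kb} KB of ${total_kb} KB free for $DPDIR"
fi
```

Such a check can be run from cron to catch a filling pipeline volume before archiving stalls.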
jbd
Displays the status of the Storage Manager. The status is Active if the server is
running. A status of either Can't call server, Can't connect to server, or Not
active indicates that the server is either not reachable or not running. Check the
jbd.log log file for errors. If necessary, solve the problem and start the Storage
Manager again.
inodes
Displays how full the inode files are. The status is either Ok or Error.
In Details, you can see the filling level in percent as well as the number of
configured and used inodes. If an error is displayed, the storage space for the file
system information must be increased.
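On UNIX systems, comparable information can be read with df -i on the file system in question. The mount point below is an assumed example; use the volume that holds the Storage Manager's inode files:

```shell
# Sketch: show inode usage for a file system. MOUNT is an assumed
# example; substitute the relevant volume on your server.
MOUNT="${MOUNT:-/}"

# df -iP prints: filesystem inodes iused ifree iuse% mount-point
set -- $(df -iP "$MOUNT" | tail -1)
inodes_total=$2
inodes_used=$3

echo "inodes on $MOUNT: $inodes_used of $inodes_total used"
```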
<jukebox_name>
Provides an overview of the volumes for each attached jukebox. The possible
status specifications are Ok, Warning or Error. Warning means that there are no
writeable volumes or no empty slots in the jukebox. Error is displayed if at least
one corrupt medium is found in a jukebox (display -bad- in Devices in OpenText
Administration Client).
The following information is displayed in Details:
30.2.4 DS Pools
The Monitor checks the free storage space which is available to the pools (and
therefore the logical archives). The pools and buffers are listed. The availability of
the components depends on two factors. Volumes must be assigned and there must
be sufficient free storage space in the individual volumes.
• The Ok status specifies that volumes are present and sufficient storage space is
available.
• The Error status together with the No volumes present message means that a
volume (WORM or hard disk) needs to be assigned to this buffer or pool.
• The Error status with the No writable partitions message refers to WORM
volumes and means that the available volumes are full or write-protected.
Initialize and assign a new volume and/or remove the write-protection.
• The Full status refers to disk buffers or hard disk pools and means that there is
no free storage space on the volume. In the case of a hard disk pool, create a new
volume and assign it to this pool.
In the case of a disk buffer, check whether the Purge_Buffer job has been
processed successfully and whether the parameters for this job are set correctly.
The status is Ok, Warning or Error. In Details, you can see the free storage space in
MB, the total storage space in MB and the proportion of free storage space in
percent. The values refer to the hard-disk volume in which the log directory was
installed.
A warning or error message is issued if insufficient free storage space is available.
Delete all log files that are no longer needed. To avoid problems, delete log files
regularly.
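A find-based sketch like the following can support the regular cleanup. The log directory and the 30-day age are assumed examples, and remember to stop the Spawner before actually deleting files that may still be open:

```shell
# Sketch: list log files older than 30 days; review the output, then
# append -delete to remove them. LOGDIR and the age are assumptions.
LOGDIR="${LOGDIR:-/var/log/archive}"

if [ -d "$LOGDIR" ]; then
  # -mtime +30 matches files not modified within the last 30 days
  find "$LOGDIR" -name '*.log' -mtime +30 -print
fi
```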
DP Tools
The Monitor checks the availability of the DocTools. The status is Registered if the
DocTool has been started. Various messages can appear under Details for the status:
Lazy
The DocTool is unoccupied. There are no documents available for processing.
Active
The DocTool is processing documents.
Disabled
The DocTool has been locked. To check this status, start Document Pipeline Info.
Here, all the queues that are associated with a locked DocTool are identified by
the locked symbol. In general, a DocTool is only locked if an error has occurred.
Once the problem has been analyzed and eliminated, restart the DocTool in
Document Pipeline Info.
Not registered
The DocTool has not been started.
DP Queues
Monitors all queues of the Document Pipelines and specifies the number of
documents in each queue. Precisely one DocTool is assigned to each queue. One
DocTool can be assigned to multiple queues. You can find the same queues in
Document Pipeline Info but with different names.
Usually, the documents are processed very quickly by the associated DocTool and
the queues are empty. The Empty status is specified. If there are documents in the
queue, the status is set to Not empty. In Details, you find the number of documents
in the queue. To analyze this situation, check the availability of the DocTool under
DP Tools and use the functions provided in Document Pipeline Info.
DP Error Queues
Monitors the error queues and specifies the number of documents in each queue.
There is an error queue for each ordinary queue. Documents in error queues cannot
be processed because of an error. The processing DocTool is specified for each
queue. You can find the corresponding queues in Document Pipeline Info but with
different names.
The error queues are usually Empty. If a DocTool cannot process a document, the
document is moved to the error queue. The status is set to Not empty. In Details,
you can see the number of unprocessed documents. If the same error occurs for all
the documents in this pipeline, then all the documents are gathered in the error
queue. The documents cannot be processed until the error has been eliminated and
the documents have been transferred for processing again with Restart in Document
Pipeline Info.
...doctods
One or more documents cannot be archived.
• In the DocService component group, check the wc component. If Error is
displayed, Archive Server is not available and must be restarted.
• Check the DS Pools component group. If Warning or Error is displayed for
the logical archive in which the document is to be archived or for the
corresponding disk buffer, there is no storage space available for archiving.
Please note the comments on DS Pools above.
...wfcfbc and ...notify
These DocTools are used to subdivide collective documents into single
documents. It is unusual for errors to occur here.
...cfbx
The response cannot be sent to the SAP system.
• The connection to the SAP system is not established. Check the cfbx.log log
file for information on possible error causes.
• The configuration parameters for setting up the connection are incorrect.
Check the configuration of the SAP system and the archive in the Servers tab
in OpenText Administration Client.
...docrm
The temporary data in the pipeline could not be deleted following the correct
execution of all the preceding DocTools. Start Document Pipeline Info and
remove the documents in the corresponding error queue. You require special
access rights to do this.
31.1 Auditing
The auditing feature of Archive Server traces events of two aspects:
• It records the document lifecycle, that is, the history of a document: when the
document was created, modified, migrated, deleted, and so on. These are the
events of the Document Service.
• It records administrative jobs performed with Administration Client.
Important
Administrative changes are only recorded if they are made with
Administration Client. To get complete audit trails, make sure that other
configuration methods, for example, editing configuration files directly,
cannot be used. At a minimum, such changes must be logged by other means.
The auditing data is collected in separate database tables and can be extracted from
there with the exportAudit command to files, which can be evaluated in different
ways.
You can define the timeframe for data extraction. Without these dates, you get all
audit data up to the current date and time.
With additional options, you can adapt the output to your needs.
Option Description
-a Only relevant for document lifecycle information (-S is set). Extracts data about all document-related jobs in the given timeframe. The generated file name reflects this option with the ALL indicator: STR-<begin date>-<end date>-ALL.<ext>.
-x Deletes data from the database after successful extraction. This option is not supported if -a is set, so only information on deleted documents can be removed from the database after extraction.
-o ext Defines the file format. For example, with -o csv you get a .csv file for evaluation in Excel, independently of the extracted data.
-h Adds a header line with column descriptions to the output file.
-c sepchar Defines the separator character directly (e.g. -c , ) or as an ASCII number in 0x<val> syntax (e.g. -c 0x7c). The default separator is the semicolon. Consider changing the separator if it does not fit your Excel settings.
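Once extracted, such a file can be evaluated with standard tools. The sketch below counts events per type in a semicolon-separated export; the sample lines and the column layout (event name in the first field) are assumptions made for illustration, so check the header line (-h) of your own export:

```shell
# Sketch: summarize an exportAudit file by event type. The sample data
# and column layout are assumptions; inspect a real export first.
AUDIT_FILE="${AUDIT_FILE:-audit.csv}"

cat > "$AUDIT_FILE" <<'EOF'
EVENT_CREATE_DOC;aa123;2011-05-16 10:01:02
EVENT_CREATE_COMP;aa123;2011-05-16 10:01:03
EVENT_DOC_DELETED;aa099;2011-05-16 11:30:00
EOF

# Count occurrences per event; -F';' matches the default separator.
awk -F';' '{count[$1]++} END {for (e in count) print e, count[e]}' \
  "$AUDIT_FILE" | sort
```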
Event Description
EVENT_CREATE_DOC Document created
EVENT_CREATE_COMP Document component created on volid1
EVENT_UPDATE_ATTR Attributes updated
EVENT_TIMESTAMPED Document timestamped on volid1 (dsSign, dsHashTree)
EVENT_TIMESTAMP_VERIFIED Timestamp verified on volid1
EVENT_TIMESTAMP_VERIF_FAILED Timestamp verification failed on volid1
EVENT_COMP_MOVED Document component moved from HDSK volid1 to volid2 (dsCD etc. with -d)
EVENT_COMP_COPIED Document component copied from volid1 to volid2 (dsCD etc. without -d)
EVENT_COMP_PURGED Document component purged from HDSK volid1 (dsHdskRm)
EVENT_COMP_DELETED Component deleted from volid1
EVENT_COMP_DELETE_FAILED Component deletion from volid1 failed
EVENT_COMP_DESTROYED Component destroyed from volid1
EVENT_DOC_DELETED Document deleted
EVENT_DOC_MIGRATED Document migrated
EVENT_DOC_SET_EVENT setDocFlag with retention called
EVENT_DOC_SECURITY Security error when attempting to read doc
31.2 Accounting
Archive Server allows collecting of accounting data for further analysis and billing.
To use accounting:
1. Enable the Accounting option and configure accounting in Configuration; see
“Settings for Accounting” on page 318.
The Document Service writes the accounting information into accounting files.
2. Evaluate the accounting data; see “Evaluating Accounting Data” on page 319.
3. Schedule the Organize_Accounting_Data job to remove the old accounting
data (see “Setting the Start Mode and Scheduling of Jobs” on page 100).
If you archive the old accounting data, you can also access the archived files. The
Organize_Accounting_Data job writes the DocIDs of the archived accounting files
into the ACC_STORE.CNT file which is located in the accounting directory (defined in
Path to accounting data files).
To restore archived accounting files, you can use the command
dsAccTool -r -f <target directory>
The tool saves the files in the <target directory> where you can use them as usual.
software packages.
This list is useful when you contact the OpenText Customer Support.
See also:
• “Utilities” on page 251
• “Checking Utilities Protocols” on page 252
Important
Stop the Spawner before you delete the log files!
On client workstations, other log files are used. For more information, refer to the
Imaging documentation.
UNIX
$ORACLE_HOME/network/log/listener.log (log file)
$ORACLE_HOME/network/trace (trace file)
$ORACLE_HOME/rdbms/log/*.trc (trace files)
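When the database does not start, scanning the listener log for Oracle error codes is often the quickest check. The following sketch builds on the UNIX paths listed above; the fallback path and the TNS- pattern (the prefix of Oracle network error messages) are illustrative assumptions:

```shell
# Sketch: show the last TNS errors from the listener log. The fallback
# path is an assumed example; adjust it to your ORACLE_HOME.
LOG="${LOG:-${ORACLE_HOME:-/opt/oracle}/network/log/listener.log}"

if [ -f "$LOG" ]; then
  grep -n 'TNS-' "$LOG" | tail -20
fi
```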
Starting
Windows Services
To start Archive Server using the Windows Services, proceed as follows:
Command line
To start Archive Server from the command line, enter the following commands in
this order:
net start OracleServiceECR (Oracle database) or net start mssqlserver (MS
SQL database)
net start Oracle<ORA_HOME>TNSListener (Oracle database)
net start spawner (archive components)
Stopping
Windows Services
To stop Archive Server components using the Windows Services, proceed as follows:
1. On the desktop, right-click the My Computer icon and select Manage.
The Computer Management window now opens.
2. Open the Services and Applications directory and click Services.
3. Right-click the following entries in the given order and select Stop:
• Archive Spawner (archive components)
• Oracle<ORA_HOME>TNSListener (Oracle database)
• OracleServiceECR (Oracle database) or MSSQLSERVER (MS SQL database)
Command line
To stop Archive Server components from the command line, enter the following
commands in this order:
net stop spawner (archive components)
net stop Oracle<ORA_HOME>TNSListener (Oracle database)
net stop OracleServiceECR (Oracle database) or net stop mssqlserver (MS SQL
database)
Starting
Use the commands listed below to restart Archive Server after the archive system
has been stopped without shutting down the hardware.
2. Start the archive system including the corresponding database instance with:
HP-UX: /sbin/rc3.d/S910spawner start
Stopping
Enter the commands below to terminate Archive Server manually.
2. Check the status of the process with spawncmd status (see “Analyzing
Processes with spawncmd” on page 333).
3. Enter the command:
spawncmd {start|stop} <process>
Description of parameters:
{start|stop}
To start or stop the specified process.
<process>
The process you want to start or stop. The name appears in the first column of
the output generated by spawncmd status.
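Since the process name is the first column of the status listing, it can be extracted with awk, for example to loop over all processes. The status output below is invented for illustration; the real listing depends on your installation:

```shell
# Sketch: pull process names from a spawncmd status listing.
# STATUS_SAMPLE mimics the output format and is an assumption.
STATUS_SAMPLE='admsrv    running
jds       running
dp        stopped'

# The process name is the first column of each line.
printf '%s\n' "$STATUS_SAMPLE" | awk '{print $1}'
```

On a live system you would pipe the output of spawncmd status through the same awk filter instead of the sample text.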
Important
You cannot simply restart a process if it was stopped, regardless of the
reason. This is especially true for Document Service, since its processes must
be started in a defined sequence. If a Document Service process was
stopped, it is best to stop all the processes and then restart them in the
defined sequence. Inconsistencies can also occur when you start and stop the
monitor program or the Document Pipelines this way.
• reread
• start <service>
• status
• stop <service>
• startall
• stopall
Process status To check the status of the processes, enter spawncmd status in the command line.
A brief description of some processes is listed here:
Process Description
Clnt_dp Client to monitor the Document Pipelines
Clnt_ds Client to monitor the Document Service
admsrv Administration Server
jds Document Service read and write component
ixmonsvc Monitor server process
notifSrvr Notification server process
dp Document Pipelines
jbd STORM daemon
timestamp Timestamp Server
purgefiles removes log files of Tomcat
doctods, docrm, ... various DocTools
You can find information about the DocTools in the Document Pipeline Info. This
interface allows you to start and stop single DocTools and to resubmit documents
for processing.
Important
Higher log levels can generate a large amount of data and can even slow
down the archive system. Reset the log levels to the default values as soon as
you have solved the problem. Delete the log files only after you have
stopped the Spawner.
Time setting
In addition to the log levels, you can define the time label in the log file for each
component. Normally, the time is given in hours:minutes:seconds. If you select
Log using relative time, the time elapsed between one log entry and the next is
given in milliseconds instead of the date, in addition to the normal time label. This
is used for debugging and fine tuning.
Annotation
The set of all graphical additions assigned to individual pages of an archived
document (e.g., colored marking). These annotations can be removed again.
They simulate handwritten comments on paper documents. There are two
groups of annotations: simple annotations (lines, arrows, highlighting etc.) and
OLE annotations (documents or parts of documents which can be copied from
other applications via the clipboard).
See also: Notes.
Archive ID
Unique name of the logical archive.
Archive mode
Specifies the different scenarios for the scan client (such as late archiving with
barcode, preindexing).
ArchiveLink
The interface between SAP system and the archive system.
Buffer
Also known as “disk buffer”. It is an area on hard disk where archived
documents are temporarily stored until they are written to the final storage
media.
Burn buffer
A special burn buffer is required for ISO pools in addition to a disk buffer. The
burn buffer is required to physically write an ISO image. When the specified
amount of data has accumulated in the disk buffer, the data is prepared and
transferred to the burn buffer in the special format of an ISO image. From the
burn buffer, the image is transferred to the storage medium in a single,
continuous, uninterruptible process referred to as “burning” an ISO image. The
burn buffer is transparent to the administration.
Cache
Memory area which buffers frequently accessed documents.
Archive Server stores frequently accessed documents in a hard-disk volume
called the Document Service cache. The client stores frequently accessed
documents in the local cache on the hard disk of the client.
Cache Server
Separate machine on which documents are stored temporarily, thus reducing
the network traffic in a WAN.
Device
Short term for storage device in the Archive Server environment. A device is a
physical unit that contains at least storage media, but can also contain additional
software and/or hardware to manage the storage media. Devices are:
• Local hard disks
• Jukeboxes for optical media
• Virtual jukeboxes for storage systems
• Storage systems as a whole
Digital Signature
Digital signature means an electronic signature based upon cryptographic
methods of originator authentication, computed by using a set of rules and a set
of parameters such that the identity of the signer and the integrity of the data can
be verified. (21 CFR Part 11)
Disk buffer
See: Buffer
DocID
See: Document ID (DocID)
DocTools
Programs that perform single, discrete actions on the documents within an
OpenText Document Pipeline.
Document ID (DocID)
Unique string assigned to each document with which the archive system can
identify it and trace its location.
DP
See: Document Pipeline (DP)
DPDIR
The directory in which the documents are stored that are being currently
processed by a document pipeline.
DS
See: Document Service (DS)
Hot Standby
High-availability Archive Server setup, comprising two identical Archive
Servers tightly connected to each other and holding the same data. Whenever the
first server becomes out of order, the second one immediately takes over, thus
enabling (nearly) uninterrupted archive system operation.
ISO image
An ISO image is a container file containing documents and their file system
structure according to ISO 9660. It is written at once and fills one volume.
Job
A job is an administrative task that you schedule in the OpenText
Administration Client to run automatically at regular intervals. It has a unique
name and a start command, which is executed along with any arguments
required by the command.
Known server
A known server is an Archive Server whose archives and disk buffers are known
to another Archive Server. Making servers known to each other provides access
to all documents archived in all known servers. Read-write access is provided to
other known servers. Read-only access is provided to replicated archives. When a
request is made to view a document that is archived on another server and the
server is known, the inquired Archive Server is capable of displaying the
requested document.
Log file
Files generated by the different components of Archive Server to report on their
operations, providing diagnostic information.
Log level
Adjustable diagnostic level of detail on which the log files are generated.
Logical archive
Logical area on the Archive Server in which documents are stored. The Archive
Server can contain many logical archives. Each logical archive can be configured
to represent a different archiving strategy appropriate to the types of documents
archived exclusively there. An archive can consist of one or more pools. Each
pool is assigned its own exclusive set of volumes which make up the actual
storage capacity of that archive.
Media
Short term for “long-term storage media” in the Archive Server environment. A
medium is a physical object: optical storage media (CD, DVD, WORM, UDO), hard
disks, and hard disk storage systems with or without WORM feature. Optical
MONS
See: Monitor Server (MONS)
Notes
The list of all notes (textual additions) assigned to a document. An individual
item of this list is designated as a “note”. A note is a text that is stored
together with the document. This text has the same function as a note clipped to
a paper document.
Pool
A pool is a logical unit, a set of volumes of the same type that are written in the
same way, using the same storage concept. Pools are assigned to logical archives.
RC
See: Read Component (RC)
Remote Standby
Archive Server setup scenario including two (or more) associated Archive
Servers. Archived data is replicated periodically from one server to the other in
order to increase security against data loss. Moreover, network load due to
document display actions can be reduced since replicated data can be accessed
directly on the replication server.
Replication
Refers to the duplication of an archive or buffer resident on an original server on
a remote standby server. Replication is enabled when you add a known server to
the connected server and indicate that replication is to be allowed. That means,
the known server is permitted to pull data from the original server for the
purpose of replication.
Scan station
Workstation for high volume scanning on which the Enterprise Scan client is
installed and to which a scanner is connected. Incoming documents are scanned
here and then transferred to Archive Server.
SecKey
With SecKeys, you can protect the connections between a client and OpenText
Archive Server. A SecKey is an additional parameter in the URL of the archive
access. It contains a digital signature and a signature time and date. The client
application creates a signature for the relevant parameters in the URL and the
expiration time, and signs it with a private key. Archive Server verifies the
signature with the public key, and accepts requests only with a valid signature
and if the SecKey's expiration time has not been reached.
Slot
In physical jukeboxes with optical media, a slot is a socket inside the jukebox
where the media are located. In virtual jukeboxes of storage systems, a slot is
virtually assigned to a volume.
Spawner
Service program which starts and terminates the processes of the archive system.
Storage Manager
Component that controls jukeboxes and manages storage subsystems.
Volume
• A volume is a memory area of a storage media that contains documents.
Depending on the device type, a device can contain many volumes (e.g. real
and virtual jukeboxes), or is treated as one volume (e.g. storage systems w/o
virtual jukeboxes). Volumes are attached - or better, assigned or linked -
logically to pools.
• Volume is a technical collective term with different meaning in STORM and
Document Service (DS). A DS volume is a virtual container of volumes with
identical documents (after the complete backup is written). A STORM
volume is a virtual container of all identical copies of a volume. For ISO
volumes, there is no difference between DS and STORM volumes. Regarding
WORM (IXW) volumes, STORM differentiates between original and
backup; they are different volumes, while DS considers original and backup
together as one volume.
WC
See: Write Component (WC)
Windows Viewer
Component for displaying, occasional scanning with Twain scanners and
archiving documents. The Windows Viewer can attach annotations and notes to
the documents.
WORM
WORM means Write Once Read Multiple. An optical WORM disk has two
volumes. A WORM disk supports incremental writing. On storage systems, a
WORM flag is set to prevent changes in documents. UDO media are handled like
optical WORMs.
Write job
Scheduled administrative task which regularly writes the documents stored in a
disk buffer to appropriate storage media.
unlock 154
policies 155
checking 157
creating and modifying 157
overview 156
pool types
  HDSK 34
  ISO 33
  IXW 33
  single file (FS) 34
  single file (VI) 34
pools 33
  types 84
problem analysis 335
processes
  important processes 334
  starting and stopping 331
  status 334
protocol
  jobs 101
purge buffer job 31
putcert 106

Q
queues
  monitor display 311

R
recIO 108
recover
  IXW volumes 242
recovery 245
  Archive Cache Server 248
  ISO volumes 239
remote migration 257
Remote Standby Server 181
report
  system 209
restore
  ISO volumes 239
  IXW volumes 242
restoring
  See “recovery”
retention 69
retention settings 81
RSS
  See “Remote Standby Server”

S
SAP as leading application
  configuring connection 163
scan
  scenarios 169
scan hosts
  configuring 169
scan stations
  archive mode 171
  configuring 169
scenario
  system 209
scheduling
  jobs 35
secKeys 104
  from other applications 105
  from SAP 106
  importing certificates 105
security
  certificates 103, 117
  checksums 103, 126
  deleting certificates 119
  enabling certificate 119
  encrypted document storage 103
  encryption 106
  fingerprint 118
  importing certificate for authentication 122
  importing certificate for timestamp verification 126
  key store encryption 125
  overview 103
  PEM file 117
  secKeys 104
  secKeys/signed URL 103, 104
  SSL 103
  timestamps 103, 111
  verifying certificate 118
Set Encryption Certificates 125
signature renewal
  renewing hash tree 115
single file (FS) 34
single file (VI) 34
single file storage 32
single instance 67
spawncmd 333
Spawner
  See “Archive Spawner”
standard users 155

U
unavailable volumes 62