
Chariot User Guide, version 3.1

(June-99)

Copyright (C) Ganymede Software Inc., 1995-1999. All rights reserved.


Ganymede Software Inc.
1100 Perimeter Park Drive Suite 104
Morrisville, North Carolina 27560
U.S.A.
Web: http://www.ganymede.com/

Contents
Preface    ix
Ganymede Software Inc. End-User License Agreement, Console Software........................... ix
Introductory Material............................................................................................................. xi
Trademarks..........................................................................................................................xii

Welcome    1
What You Need to Know ....................................................................................................... 1
What's New in Chariot 3.1 ..................................................... 1
New Functions in Chariot 3.1.................................................................................... 1
Usability Improvements ............................................................................................ 2
Version 3.1 Compatibility Considerations .................................................................. 2
About This Manual ................................................................................................................ 3
Conventions in This Manual................................................................................................... 4
Chariot Product Types........................................................................................................... 4
Use Our Extensive Online Help ............................................................................................. 4

Introducing Chariot    7

A Brief Walkthrough .............................................................................................................. 8


How Chariot Works .................................................................................................. 9
Running Your First Chariot Test............................................................................................. 9

Installing Chariot    11

The Chariot Package........................................................................................................... 11


Step-by-Step Installation Instructions................................................................................... 11
Hardware/Software Requirements for Windows NT................................................. 12
Hardware/Software Requirements for Windows 95/98 ............................................ 13
Installing Chariot 3.1 Over Chariot 2.2 .................................................................... 14
Running Console Setup for Windows 95, 98, and NT.............................................. 14
Removing the Chariot Package (Uninstall) .............................................................. 16

Configuring Chariot in Your Network    19

APPC Configuration ............................................................................................................ 20


APPC on Windows NT with IBM Communications Server or Personal
Communications .................................................................................................... 21
APPC on Windows NT with Microsoft's SNA Server................................................ 21
Selecting a Service Quality (APPC Mode Name) .................................................... 24
General APPC Topics ............................................................................................ 25
IPX and SPX Configuration.................................................................................................. 26
Determining your IPX Network Address (Windows NT) ........................................... 27


RTP, TCP, and UDP Configuration...................................................................................... 27


Determining your IP Network Address .................................................................... 27
Selecting a Service Quality (for RTP, TCP, and UDP)............................................. 28
Trying Out the TCP/IP Connection ......................................................................... 30
Sockets Port Number............................................................................................. 30
Microsoft's WinSock 2 Software ............................................................. 30

Working with Datagrams and Multimedia Support    33

Understanding Datagram Support ....................................................................................... 33


Network Applications: Connection-oriented vs. Connection-less.............................. 33
Understanding Reliable Datagram Support .......................................................................... 34
How Endpoints Emulate Reliable Datagram Delivery .............................................. 34
Tuning Your Tests to Emulate Reliable Datagram Applications ............................... 35
Understanding Multimedia Support...................................................................................... 38
Delivering Data: Unicast, Broadcast, and Multicast................................................. 38
How Endpoints Emulate Multimedia Applications.................................................... 38
Modifying the Multimedia Run Options for the Test................................................. 39
Other Factors Affecting Multimedia Performance.................................................... 40
RTP Configuration ................................................................................................. 40
Understanding Jitter Measurements....................................................................... 41
IP Multicast ......................................................................................................................... 42
Emulating IP Multicast Applications ........................................................................ 43
Setting Up Your Hardware and Software For IP Multicast ....................................... 45

Operating the Console    47

Creating and Running Tests: An Overview .......................................................................... 47


The Main Window ............................................................................................................... 48
The File Menu (Main Window)................................................................................ 49
The Options Menu (Main Window) ......................................................................... 49
The Tools Menu..................................................................................................... 55
The Help Menu ...................................................................................................... 60
Getting Information About Chariot .......................................................................... 60
Keys Help for the Main Window.............................................................................. 60
The Test Window ................................................................................................................ 61
The Status Bar (Test Window) ............................................................................... 62
The File Menu (Test Window) ................................................................................ 62
The Edit Menu (Test Window) ................................................................................ 66
The View Menu...................................................................................................... 72
The Run Menu ....................................................................................................... 77
The Window Menu ................................................................................................. 84
The Test Setup Tab ............................................................................................... 85
The Throughput Tab .............................................................................................. 85
The Transaction Rate Tab...................................................................................... 85
The Response Time Tab........................................................................................ 86


The Lost Data Tab ................................................................................................. 86


The Jitter Tab......................................................................................................... 87
The Raw Data Totals Tab....................................................................................... 87
The Endpoint Configuration Tab ............................................................................. 88
The Datagram Tab ................................................................................................. 88
Examining Your Timing Records............................................................................. 89
Keys Help for the Test Window............................................................................... 91
The Comparison Window..................................................................................................... 92
The File Menu (Comparison Window) ..................................................................... 93
The Edit Menu (Comparison Window)..................................................................... 94
Keys Help for the Comparison Window................................................................... 95
Working with the Error Log Viewer....................................................................................... 96
The File Menu (Error Log Viewer)........................................................................... 97
The View Menu (Error Log Viewer) ......................................................................... 97
The Options Menu (Error Log Viewer) .................................................................... 98
Keys Help for the Error Log Viewer......................................................................... 99
Working with the Script Editor.............................................................................................. 99
Editing a Parameter of a Script Command ............................................................ 101
Editing a Script Variable ....................................................................................... 101
The File Menu (Script Editor) ................................................................................ 102
The Edit Menu (Script Editor) ............................................................................... 104
The Insert Menu (Script Editor)............................................................................. 106
Script Editor Keys Help......................................................................................... 107
File Types and How They are Handled .............................................................................. 108

Using the Command-Line Programs    111

RUNTST - Running Tests ................................................... 111


FMTTST - Formatting Test Results ..................................................................... 112
CLONETST - Replicating Pairs in a Test ............................................................. 114
FMTLOG - Formatting Binary Error Logs ............................................................ 116

Viewing the Results    117

Reading Your Test Results ................................................................................................ 117


Summary, Run Options, and Test Setup Section .................................................. 117
Test Totals Section............................................................................................... 118
Endpoint Pair Details ............................................................................................ 125
Technical Details ............................................................................................................... 128
Confidence Intervals............................................................................................. 128
Relative Precision................................................................................................. 129
Understanding Timing .......................................................................................... 129

Tips for Testing    133

Simplifying Network Configuration...................................................................................... 133


Performance Tuning for TCP/IP............................................................................ 133


Performance Tuning for APPC ............................................................................. 135


Making Tests More Flexible.................................................................................. 137
Testing Through Firewalls ................................................................................................. 138
Console through Firewall to Endpoint 1 ................................................................ 138
Endpoint 1 through Firewall to Endpoint 2 ............................................................ 139
Designing Chariot Performance Tests ............................................................................... 141
Throughput Testing.............................................................................................. 142
Streaming Testing................................................................................................ 142
How Long Should a Performance Test Run? ........................................................ 142
Multiple Endpoint Pairs on One Computer ............................................................ 143
Short versus Long Connections............................................................................ 143
Using Chariot for Stress Testing........................................................................................ 144
Using RUNTST for Stress and Regression Testing............................................... 144
Getting Consistent Results ................................................................................................ 145
Take Care When Changing Software Versions ..................................................... 146
Guidelines for Choosing Your Data.................................................................................... 147
Creating Your Own User Data Files...................................................................... 147
Avoiding Too Many Timing Records .................................................................................. 148
Using Non-Streaming Scripts ............................................................................... 149
Using Streaming Scripts....................................................................................... 150
Automating Tests To Form a Simple Monitor ..................................................................... 150

Troubleshooting    153

Reading Error Messages................................................................................................... 153


Viewing Detailed Error Information ....................................................................... 154
Determining Which Computer Detected the Error ................................................. 154
Common Problems ........................................................................................................... 155
Insufficient Threads.............................................................................................. 155
Insufficient Resources.......................................................................................... 155
Protection Faults and Traps ................................................................................. 155
Assertion Failures ................................................................................................ 155
Damaged Files..................................................................................................... 156
Locale Could Not Be Determined ......................................................................... 156
If You Find a Problem ....................................................................................................... 156
Functional Limitations and Known Problems...................................................................... 157
Known Problems in Microsoft's TCP/IP for Windows NT ....................... 157
Known Problems in Microsoft's IPX/SPX for Windows NT..................... 157
Known Problems in Microsoft's SNA Server ......................... 158
Known Problems in IBM's Communications Server for Windows NT ..................... 158
Known Problems in IBM's Personal Communications for Windows NT.................. 159
Known Problems in IBM's Personal Communications for Windows 95................... 159
Getting the Latest Fixes and Service Updates ................................................................... 160
Updates for Microsoft Windows NT ...................................................................... 160
Updates for Microsoft Windows 95 ....................................................................... 160
Updates for Microsoft Windows 98 ....................................................................... 160


Updates for Microsoft SNA Server ........................................................................ 160


Updates for IBM's SNA Software for Windows NT ................................................ 160
Updates for Novell Client Software ....................................................................... 161

Ganymede Software Customer Care    163

Customer Service.............................................................................................................. 163


Troubleshooting Guidelines ............................................................................................... 163
How to Get Technical Support ........................................................................................... 164

Index    165


Preface
Ganymede Software Inc. End-User License Agreement, Console Software
CHARIOT CONSOLE SOFTWARE
Grant of License. Ganymede is licensing (not selling) the enclosed software (the "Software") to you in object
code form. Subject to the terms of this Agreement, you have the non-exclusive, non-transferable right to do the
following: (a) install the Software on a single computer (the "Designated CPU"); (b) to use and operate the
Software on the Designated CPU in connection with the platform set forth with the accompanying Software, to
run up to the number of simultaneous tests specified on the accompanying invoice for the license fee; (c) make
ONE copy of the Software for backup and archival purposes, provided that you also keep the original copy of
the Software in your possession; and (d) use the documentation contained in this package (the
"Documentation") during the term of this Agreement in support of your use of the Software.
Protection of Software. You agree to take all reasonable steps to protect the Software and Documentation
from unauthorized copying or use. The Software and Documentation represent and contain certain copyrighted
materials, as well as trade secrets and other valuable proprietary information of Ganymede and/or its licensors.
The source code and embodied proprietary information and trade secrets are not licensed to you, and any
modification, addition or deletion is strictly prohibited. You agree not to disassemble, decompile, or otherwise
reverse engineer the Software, or examine network flows or related flow methodology employed by the
Software, in order to discover the source code or other proprietary information and trade secrets contained in
the Software.
Restrictions. You agree that you may not: (a) use, copy, merge, or transfer copies of the Software or the
Documentation, except as specifically authorized in this Agreement; (b) use the backup or archival copy of the
Software (or permit any third party to use such copy) for any purpose other than to replace the original copy in
the event that it is destroyed or becomes defective; or (c) rent, lease, sublicense, distribute, transfer, modify, or
timeshare the Software, the Documentation or any of your rights under this Agreement, except as expressly
authorized in this Agreement.
Ownership. Ganymede and its licensors own all rights of authorship, including copyright, in and to the
Software and the Documentation. Ganymede continues to own the copy of the Software contained in this
package and all other copies that you are authorized by this Agreement to make (all such authorized copies
being expressly included in the term "Software," as used in this Agreement). You shall own only the magnetic
or other physical media on which the Software is recorded. Ganymede and/or its licensors reserve all rights
not expressly granted to you in this Agreement.
Term. If the Software and Documentation are being licensed to you for evaluation purposes, this Agreement
will be effective for fifteen (15) days, beginning on the date you install the Software on the Designated CPU, or
such earlier date as you destroy or return the Software and Documentation. Upon your payment in full of the
applicable license fee, this Agreement will be effective until terminated. You may terminate this Agreement by
destroying or returning the Software and Documentation and all copies thereof. This Agreement will also
terminate if you fail to comply with any term or condition of this Agreement. You agree, upon any such
termination by Ganymede, to destroy all copies of the Software and Documentation or return them, postage
prepaid, to Ganymede at the address set forth below. Except as provided in the following section, returning the
Software to Ganymede following the opening and/or use of the Software will not entitle you to a refund.
LIMITED WARRANTY
Compatibility. The Software is only compatible with certain operating systems. THE SOFTWARE IS NOT
WARRANTED FOR NON-COMPATIBLE SYSTEMS. Please consult the specifications contained in the
accompanying user Documentation for more information concerning compatibility.
Magnetic Media and Documentation. Ganymede warrants that if the magnetic media or Documentation are
in a damaged or physically defective condition at the time that the license is purchased and if they are returned
to Ganymede (postage prepaid) within 90 days of purchase, Ganymede will provide you with replacements at
no charge.
Software. Ganymede warrants that if the Software fails to conform substantially to the specifications set forth
in the Documentation and if the non-conformity is reported in writing by you to Ganymede within 90 days
from the date that the license is purchased, Ganymede will, at Ganymede's option, either remedy the non-conformity or offer to refund the license fee to you upon return of all copies of the Software and Documentation
to Ganymede. In the event of a refund, this Agreement shall terminate.
GANYMEDE MAKES NO OTHER REPRESENTATIONS OR WARRANTIES, EITHER EXPRESS
OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, GANYMEDE MAKES NO
REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY, TITLE OR FITNESS FOR
ANY PARTICULAR PURPOSE. IN NO EVENT SHALL GANYMEDE BE RESPONSIBLE FOR ANY
INCIDENTAL OR CONSEQUENTIAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOSS
OF DATA OR LOST PROFITS AS A RESULT OF YOUR USE OF, OR INABILITY TO USE, THE
SOFTWARE, EVEN IF GANYMEDE IS MADE AWARE OF THE POSSIBILITY OF SUCH
DAMAGES.
This limited warranty gives you specific legal rights. Some states do not allow limitations on how long an
implied warranty lasts, or on incidental or consequential damages, so the above limitations may not apply to
you. You may also have other legal rights, which vary from state to state.
Limitation of Remedies. Ganymede's entire liability and your exclusive remedy under this Agreement are
limited to correction of defects, replacement of the magnetic media containing the Software or refund of the
license fee, at Ganymede's option.
Responsibilities of Licensee. As a licensee of the Software, you are solely responsible for the proper
installation and operation of the Software in accordance with the instructions and specifications set forth in the
Documentation. Ganymede shall have no responsibility or liability to you, under the limited warranty or
otherwise, for improper installation or operation of the Software. Any output or execution errors resulting
from improper installation or operation of the Software shall not be deemed "defects" for purposes of the
limited warranty set forth above.
GENERAL PROVISIONS
Governing Law. This Agreement shall be governed by and construed in accordance with the laws of the State
of North Carolina, except as to copyright and trademark matters governed by United States Laws and
International Treaties. This Agreement shall inure to the benefit of Ganymede, its successors and assigns.
This Agreement is deemed entered into in Wake County, North Carolina.


Entire Agreement. This Agreement sets forth the entire understanding between you and Ganymede with
respect to the subject matter hereof. This Agreement may be amended only in a writing signed by Ganymede
and by you. Nothing contained in any purchase order, acknowledgment, invoice or other form submitted by
you in connection with the license of the Software shall amend or affect the provisions of this Agreement. NO
VENDOR, DISTRIBUTOR, DEALER, RETAILER, SALES PERSON OR OTHER PERSON IS
AUTHORIZED TO MODIFY THIS AGREEMENT OR TO MAKE ANY WARRANTY,
REPRESENTATION OR PROMISE WHICH IS DIFFERENT THAN, OR IN ADDITION TO, THE
REPRESENTATIONS OR PROMISES OF THIS AGREEMENT.
Export. Export of the Software and the Documentation outside of the United States is subject to the Export
Administration Regulations of the Bureau of Export Affairs, United States Department of Commerce. In the
event you desire to export the Software outside the United States, the Software shall at all times remain subject
to the terms of this Agreement, and you agree to be responsible, at your own expense, for complying with all
applicable regulations governing such export. Ganymede makes no warranty relating to the exportability of the
Software to any particular country.
Waiver. No waiver of any right under this Agreement shall be effective unless in writing, signed by a duly
authorized representative of Ganymede. Failure to insist upon strict compliance with this Agreement shall not
be deemed a waiver of any future right arising out of this Agreement.
Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid or
unenforceable, such provision shall be fully severable, and this Agreement shall be construed and enforced as if
the illegal, invalid or unenforceable provision had never been a part of this Agreement.
If you have any questions concerning this Agreement, please contact:
Ganymede Software Inc.
1100 Perimeter Park Drive, Suite 104
Morrisville, North Carolina 27560
U.S.A.
Telephone: 919-469-0997
Facsimile: 919-469-5553

Introductory Material
All examples with names, company names, or companies that appear in this manual are imaginary and do not
refer to, or portray, in name or substance, any actual names, companies, entities, or institutions. Any
resemblance to any real person, company, entity, or institution is purely coincidental.
Ganymede Software may have patents and/or pending patent applications covering subject matter in this
manual. The furnishing of this document does not give you any license to these patents.
Printed in the United States of America.


Trademarks
CHARIOT is a federally registered trademark of Ganymede Software Inc., registration number 1,995,601.
GANYMEDE and GANYMEDE SOFTWARE are federally registered trademarks of Ganymede Software
Inc., registration number 2,053,321. Pegasus is a trademark of Ganymede Software Inc.
IBM, IBM PC, and OS/2 are registered trademarks of International Business Machines Corporation. Intel is a
registered trademark and 80386, 386, 486, and Pentium are trademarks of Intel Corporation. Microsoft and
Windows NT are registered trademarks, and Windows is a trademark of Microsoft Corporation.
Other product names mentioned in this manual may be trademarks or registered trademarks of their respective
companies and are the sole property of their respective manufacturers.

Welcome
Welcome to Chariot, by Ganymede Software Inc. Chariot is the standard for testing and monitoring the
performance of client/server networks. Chariot can help you:

Test the performance and capacity of network hardware and software
Compare competing network products before purchase
Identify the source of performance problems
Predict the effects of running new applications
Measure network performance
Tune your network
Avoid downtime by stress testing your network after changing configurations
Monitor the performance you're getting from network service providers

Chariot version 3.1 is designed to be used with version 3.3 of the Performance Endpoints.

What You Need to Know


This manual assumes:

You are familiar with basic concepts and terminology for the operating system you're using at the console
(Microsoft's Windows 95, 98, or NT).

You are familiar with the network protocols supported by the console, and with other network protocols
you may run between endpoints in a test. You should understand how to set up programs that use those
application programming interfaces (APIs).

What's New in Chariot 3.1

We've enhanced Chariot to improve how you do performance and stress testing. See the following sections for
more information:

New Functions in Chariot 3.1 on page 1
Usability Improvements on page 2
Version 3.1 Compatibility Considerations on page 2

New Functions in Chariot 3.1


RTP Support
Chariot now supports the Real-time Transport Protocol (RTP) for pairs or multicast groups using
streaming scripts. This protocol is used by many leading voice and video applications. You can use RTP
for either unicast tests or multicast tests. See RTP Configuration on page 40 for information on this
protocol. The RTP_PAYLOAD_TYPE command has been added to all streaming scripts. This command
identifies the type of data that the script is emulating.

Jitter Statistics with RTP Tests


You can now view jitter statistics for tests using the RTP protocol. The Test Window now contains a Jitter
tab to show this data. See Understanding Jitter Measurements on page 41 for information on what jitter
is and how jitter is calculated. See The Jitter Tab on page 87 for information on this new tab.
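Chariot's own jitter computation is described in Understanding Jitter Measurements on page 41. As general
background only, the sketch below shows the standard interarrival jitter estimator defined for RTP in RFC
1889; it is our illustration of the concept, not Chariot source code.

/* Standard RTP interarrival jitter estimator (RFC 1889, section 6.3.1).
 * Illustration only; it is not Chariot source code.
 * D is the difference between the receive-time spacing and the send-time
 * spacing of two consecutive packets; the running jitter J is smoothed
 * with a gain of 1/16:  J = J + (|D| - J) / 16
 */
double update_jitter(double jitter,         /* previous value of J           */
                     double prev_send_time, /* all times in the same units,  */
                     double prev_recv_time, /* for example milliseconds      */
                     double send_time,
                     double recv_time)
{
    double d = (recv_time - prev_recv_time) - (send_time - prev_send_time);
    if (d < 0.0)
        d = -d;                             /* use the absolute value of D   */
    return jitter + (d - jitter) / 16.0;
}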
CPU Utilization
You can now collect CPU utilization data for the endpoint computers running a test. To
receive CPU utilization data, select the Collect endpoint CPU utilization checkbox on the Run Options
dialog. The CPU utilization data is shown on the Raw Data Totals Tab if this checkbox is selected. See
Changing the Default Run Options on page 50 in the Operating the Console chapter for more
information.
CSV Export
This release of Chariot lets you export test results to the CSV (comma-separated values) file format. This
file format is used by popular spreadsheet programs, such as Microsoft Excel and Lotus 1-2-3. To export a
test to the CSV file format, select the Export to CSV menu item from the Export submenu on the File
menu. See Print and Export Options on page 62 in the Test Window section.
Because the versatile CSV file format has been added in this release, this is the last release in which
export to the WK3 file format will be supported. In this release, the export to WK3 feature does not export
the jitter data or the CPU utilization data.
Chariot Application Programming Interface (API)
We've added an application programming interface that lets programs written in C or the Test Control
language (TcL) create and run tests and extract their results. See the Chariot
Programming Reference manual for information on how to build and execute programs written for the
Chariot API.

Usability Improvements
Firewall Options tab
To make testing through firewalls easier, we've added several new options. These options are located on
the Firewall Options tab of the User Settings notebook
(formerly the Reporting Ports tab). See Changing Your Firewall Options in the Operating the Console
chapter on page 54.
New Legends on Some Graphs
We have added legends to some of the graphs to improve their readability and usability.
Throughput Units in Gbps
You can now show throughput units in Gbps. Select this option on the Throughput Units tab of the User
Settings notebook. See Changing Your Throughput Units in the Operating the Console chapter on page
52.

Version 3.1 Compatibility Considerations


Test files
Chariot test files you used in previous versions will load and run fine in version 3.1. You can save test
files in either version 3.1 or version 2.2 format. Test files saved in version 3.1 cannot be used with earlier
versions of the Chariot console.
Script files
There are ways to save scripts you created or modified in version 2.2 as version 2.1 scripts, with
limitations, described in the Messages and Application Scripts manual.
If you want to use a script in Pegasus version 1.2, save the script as a version 2.2 script. If you want to use
a script in Pegasus version 1.2, save the script as a version 2.1 script.


In previous versions of Chariot, the following scripts had invalid transaction loop counts:

BackWeb Signup and InfoPak Download


BackWeb Updated
Headliner Initial Load
PointCast v1 Initial/Subsequent Update
PointCast v2 Initial/Subsequent Update
SAP R/3, Authorize Payment on Invoice
SAP R/3, Create Purchase Order
SAP R/3, Login
SAP R/3, Prepare an Invoice
When you install Chariot 3.1, corrected versions of these scripts are installed. If you want to preserve
the previous versions of these scripts, press the Yes button on the "Do you want to preserve your existing
script files and variables before uninstalling?" message box. After the installation is complete, manually
copy these files from the GANYMEDE\PRESERVERD_CHARIOT_SCRIPTS directory to the
GANYMEDE\CHARIOT\SCRIPTS directory. Test files that included any of these scripts will continue to
use the old transaction loop counts.
Endpoint programs version 3.3
Chariot endpoint programs from previous versions operate correctly with version 3.1 consoles and
endpoints, when running tests and scripts with version 2.2 functionality. However, if your test contains
functionality new in version 3.1 and you are using endpoints other than version 3.3, your test will not start
successfully. A message displayed at the console lets you know which new functions are not supported by
an older endpoint.
RUNTST and FMTTST programs
These console programs can read test files from version 3.1 and version 2.2. However, the new RUNTST
writes test files in version 3.1 format, which can't be read by older versions of RUNTST or FMTTST.

About This Manual


This manual is both a user's guide and a reference. Here is an overview of each chapter's contents:
Introducing Chariot on page 7 describes concepts used in Chariot and presents a brief walkthrough.
Installing Chariot on page 11 describes the elements of the Chariot package, what hardware and software
Chariot supports, and how to install the Chariot programs on your computers.
Configuring Chariot in Your Network on page 19 describes how to set up your communications network
before you run Chariot.
Working with Datagrams and Multimedia Support on page 33 describes how the endpoints operate when
doing testing with datagrams, streaming scripts, and IP Multicast.
Operating the Console on page 47 describes how to create test files, how to run tests, and how to view the
results of tests.
Using the Command-line Programs on page 111 describes a set of programs that let you run tests, view
results, clone existing tests, and view error logs. Use these in situations, such as automated testing, where you
don't need a graphical user interface.
Viewing the Results on page 117 describes how to interpret the formatted test results.


Tips for Testing on page 133 describes topics to help improve the test results you get from Chariot. For
example, it discusses how long a test should run, using short and long connections, and things to avoid.
Troubleshooting on page 153 describes the Chariot error logs, common problems, and how to resolve
problems with communication stacks.
Service and Support on page 164 describes how to work with Ganymede Software if you encounter problems.

Conventions in This Manual


We use the following style conventions throughout this manual.
ALL CAPS
A string in all capital letters signifies a filename, a program name, or one of the script commands used in
application scripts.
bold
Bold highlighting, other than in headings and titles, indicates a menu, a menu item, or field name.
Courier font

The Courier font indicates a command to enter, or the output of a computer program.
underlined text
Underlined text indicates a hyperlink to another book or chapter, or to an Internet URL.

Chariot Product Types


The Chariot console and endpoint programs are available as several product types, such as Retail, Evaluation,
Demo, Lite, API, and Beta. Each product type is limited in some way; for example, in the
period of time that it will run or in the number of pairs allowed in a test. Contact the Ganymede
Software sales team if you have questions about the product types you are using. See Customer Service on
page 163 in the Ganymede Software Customer Care chapter.

Use Our Extensive Online Help


Chariot comes with an extensive online help system. The Chariot Help menu item is created in the Chariot
program folder during installation. When you select a help button in a dialog, Chariot transfers you to the
appropriate topic in the Chariot User Guide and gives you information about that dialog. While in the online
help, you can also select other topics, and the Web browser will transfer you to the appropriate section of the
documentation.
Here's how to find what you need in our online help.

The Home button, at the top of each chapter, takes you to our online library. From there you can select
any of our online books.

The Index button, at the top of each chapter, takes you to the index for the current online book. You can
scroll through the alphabetized index, or you can use your browser's text search feature (click Edit/Find in
version 4.x or later of Microsoft's Internet Explorer or Netscape's Navigator/Communicator) to move more
quickly through the index.

Each book has a table of contents at the left. Click on any chapter name to read it.

Each chapter has a table of contents at the top. Click on a section name to jump to it. Use your browser's
scroll bars to move through the text.

The Top buttons, at the right of each section, take you to the start of the current chapter.

The Chapter buttons, at the bottom of each chapter, take you to the next or previous chapter.

Introducing Chariot
Chariot is designed to measure the performance between pairs of networked computers. Different kinds of
distributed applications can be emulated, and the results of running the emulated applications can be captured
and analyzed.

You operate Chariot from its console, a program with a graphical user interface that lets you create and run
tests. To create a test, you determine which computers to use and the type of data traffic you want to run
between them. Chariot refers to each of these computers as a Performance Endpoint, or simply endpoint. An
endpoint pair comprises the network addresses of the two computers, the network protocol to use between
them, and the type of application to emulate. Tests can include just one endpoint pair, or be more complex,
running hundreds of endpoint pairs using a mix of network protocols and application flows.
For each endpoint pair, you select an application script which emulates the application you're considering.
Endpoints use the script to create the same data flows that an application would, without the application having
to be installed. Chariot comes with a set of predefined application scripts, providing standard performance
benchmarks and emulating common end-user applications.
The operation of Chariot is centered at its console. For example, all test files and scripts are stored at the
console and distributed to the endpoints. The endpoint software is generally installed once and rarely touched.


A Brief Walkthrough
Here is an example of how you use Chariot.
1. Start the console program.

   The console's Main window lets you create a new test or open an existing one.

2. Create a new test.

   A test consists of a list of endpoint pair addresses and the scripts to be run between them. See The File
   Menu (Main Window) on page 49 in the Operating the Console chapter for more information.

3. Create the first endpoint pair, by specifying the network addresses of Endpoints 1 and 2, and the protocol
   to use between them.

   This can be done by explicitly typing in their network addresses, or by selecting from a list of network
   addresses saved from previous tests.

4. Select an application script for the endpoint pair.

   Chariot comes with dozens of predefined scripts that emulate common types of distributed applications.
   The Messages and Application Scripts manual discusses the predefined application scripts and their
   settings.

5. Modify the variables associated with the script.

   A script contains variables that let you change the script's behavior. For example, in a file-transfer script,
   you can vary the size of the file, the number of files to transfer, and the amount of time to pause between
   each transfer. (Data is sent and received without reading from or writing to a disk.) All script variables
   have default values.

   You can add more endpoint pairs to the test, or leave it with only one endpoint pair. If you specify more
   than one pair, they are all run simultaneously.

6. Run the test.

   The scripts are sent to each of the computers that you've specified as Endpoint 1. These endpoints ensure
   that they can communicate with their respective Endpoint 2 partners, and then pass them half of the script.
   When all of the endpoints are ready to start the test, the console directs the Endpoint 1 programs to start
   executing the scripts.

   As an endpoint reaches checkpoints within a script, it creates timing records. These timing records are
   sent back to the console, which uses them to calculate the statistics about a test run. This operation is
   shown below in "How Chariot Works" on page 9.

7. View the results.

   You get a summary of the test results when a test is run. You can see the minimum, maximum, and
   average measurements for throughput, response time, and transaction rates. For streaming scripts, you can
   see how much data, if any, was lost. You can also see the details of each timing record created during a
   test. See the Viewing the Results chapter on page 117 for more information.

8. Save the results.

   The results can be saved in a file, together with the test and script(s) used to generate them. If you want to
   do further processing of the results, you can export them to a CSV file for use with a product like Excel or
   Lotus 1-2-3. You can also export results in HTML format, ready to be loaded into a Web browser.


How Chariot Works


Chariot operates by distributing test setup information from the console to each Endpoint 1 computer when a
test is run. Here's a simple example, with one endpoint pair.

The key flows in the above picture are numbered and described below.
1. A test is created at the console, and the user presses the Run button. The console sends the setup
   information to Endpoint 1, including the application script, the address of Endpoint 2, the protocol to use
   when connecting to Endpoint 2, the service quality to use, how long to run the test, and how to report
   results.

2. Endpoint 1 keeps its half of the application script, and forwards the other half to Endpoint 2. When
   Endpoint 2 has acknowledged it is ready, Endpoint 1 replies to the console. When all endpoint pairs are
   ready (in this example, there's just one pair), the console directs them all to start.

3. The two endpoints execute their application script, with Endpoint 1 collecting the timing records.

4. Endpoint 1 returns timing records to the console, which displays the results.

Running Your First Chariot Test


You can start testing with Chariot as soon as you install a console on your computer by using it as the console
and as both endpoints. This first test assumes you already have a TCP/IP stack installed and set up correctly on
your computer. This is what you do next:

Start Chariot by clicking on Chariot console in the Chariot folder.

In the Main window, press the New button to get to a Chariot Test window. The Test window lets you add
pairs of endpoints to a test.

Click on the Add pair icon in the toolbar (two squares connected by parallel lines; it should be the first
icon that is not grayed). The Add an Endpoint Pair dialog appears.


Enter the following information:

The addresses of the two endpoints

   Enter an IP address of 127.0.0.1 (your computer's local loopback address) in both the Endpoint 1
   network address and Endpoint 2 network address fields.

The network protocol

   The Network protocol should already be set to TCP.

The script to run between the endpoints

   Click on the Open a script file button.
   Double-click on the script named CREDITL.SCR.

Press the OK button to close the Add an Endpoint Pair dialog. You have set up a test with one pair.

Run the test by clicking on the Run icon in the toolbar (a stick figure and a green sign).
The test runs using your computer as the console and as the endpoints. The IP address of 127.0.0.1
eliminates the need to know any other computer's TCP/IP addresses. The throughput and response time
numbers you get are constrained by the speed of your CPU, since the whole test runs completely inside
your computer.
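
Before running this first test, you may want to confirm that the local TCP/IP stack and the 127.0.0.1
loopback address are working; the simplest check is to enter ping 127.0.0.1 at a command prompt. The
short C program below is an optional, equivalent sketch of our own (it is not part of Chariot, and port
5001 is an arbitrary choice for the example): it starts WinSock and opens a TCP connection from your
computer to itself over the loopback interface.

/* loopchk.c -- a minimal check that the TCP/IP loopback path works.
 * Illustration only; this is not part of Chariot.
 * Build with the Microsoft compiler:  cl loopchk.c ws2_32.lib
 */
#include <stdio.h>
#include <string.h>
#include <winsock2.h>

int main(void)
{
    WSADATA wsa;
    SOCKET lsock, csock;
    struct sockaddr_in addr;

    if (WSAStartup(MAKEWORD(2, 0), &wsa) != 0) {
        printf("WinSock is not installed or could not be started.\n");
        return 1;
    }

    /* Listen on 127.0.0.1, port 5001 (an arbitrary port for this example). */
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = inet_addr("127.0.0.1");
    addr.sin_port = htons(5001);
    lsock = socket(AF_INET, SOCK_STREAM, 0);
    if (lsock == INVALID_SOCKET ||
        bind(lsock, (struct sockaddr *)&addr, sizeof(addr)) != 0 ||
        listen(lsock, 1) != 0) {
        printf("Could not listen on 127.0.0.1: error %d\n", WSAGetLastError());
        return 1;
    }

    /* Connect to ourselves through the loopback interface. */
    csock = socket(AF_INET, SOCK_STREAM, 0);
    if (csock == INVALID_SOCKET ||
        connect(csock, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        printf("Loopback connect failed: error %d\n", WSAGetLastError());
        return 1;
    }
    printf("TCP/IP loopback (127.0.0.1) appears to be working.\n");

    closesocket(csock);
    closesocket(lsock);
    WSACleanup();
    return 0;
}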

Installing Chariot
Before you begin working with Chariot:
1. Check the Chariot package to make sure you have all the contents. For details on the contents of the
   package, see The Chariot Package on page 11.

2. Make sure you have the required hardware and software to run Chariot.

3. Read the README.TXT file.

4. Decide which computers you'll use as the console and the endpoints.

5. Make sure you have a Web browser installed on your chosen console computer.

6. Run the installation program at these computers.

7. Ensure the console and the endpoint programs are configured correctly for the network protocol(s)
   they are using.

The Chariot Package


Your Chariot console package should include the following:

A printed Chariot User Guide, containing information on the Chariot console

A Chariot CD-ROM

A registration card containing your unique Chariot registration number

A printed Performance Endpoint manual, containing information on the Performance Endpoints

A Performance Endpoint CD-ROM

A printed Messages and Application Scripts manual containing information on the application scripts
and a listing of all messages

Make sure you have everything listed here. Please contact us if anything is missing; see the Ganymede
Software Customer Care chapter on page 163 for information about contacting us.

Step-by-Step Installation Instructions


The Chariot console is a 32-bit application for Windows 95, 98, and NT. This section describes the
supported hardware and software and gives step-by-step instructions for installing the Chariot console.
When working with communications software, it's best to take one step at a time, to avoid interaction
problems. Before you install Chariot, we recommend that you ensure the network connections are
working between the computers you'll be using in the test.


See the Configuring Chariot in Your Network chapter on page 19 for more information on ways to find
the network addresses of the computers and check out the connections between them. Complete the steps
in that chapter before attempting to run Chariot.
When you install the console, you can choose to install the endpoint on the same computer. This lets you
get started with Chariot, running tests within a single computer, before you run it across a real network.

Hardware/Software Requirements for Windows NT


Here's what you'll need to run the Chariot console on Microsoft Windows NT:

An x86 computer capable of running Microsoft's Windows NT well. This implies a CPU such as an
Intel 80386, 80486, a member of the Pentium family, or equivalent. A Pentium or better is
recommended.

At least 32 MBytes of random access memory (RAM)


The total RAM requirement depends on the RAM usage of the underlying protocol stack and the
number of concurrent endpoint pairs. For large tests involving either hundreds of pairs or thousands
of timing records, additional memory may be required.

A hard disk with at least 16 MBytes of space available


Additional disk space may be required for your swapfile, as well, if you have large tests involving
either hundreds of pairs or thousands of timing records and your RAM is inadequate to hold
everything in memory.

A CD-ROM drive on the computer on which you are installing the Chariot Console.

A Web browser. Because Chariot online help is now in HTML format, you need a Web browser to
view the help. We recommend version 4.x (or later) of either Netscape Navigator or Microsoft
Internet Explorer.

For best results, we recommend using a color palette of at least 256 colors.
We also recommend that you get up-to-date with the latest Windows NT service levels. See the
Troubleshooting chapter for more information on how to get the latest patches and Service Packs.
You also need compatible network protocol software:
for APPC, one of the following
Three APPC stacks for Windows NT are supported by the Chariot console.

IBM Personal Communications AS/400 and 3270 for Windows NT version 4.11 or higher (its
short name is PCOMM for Windows NT): runs on x86 computers where its communications
APIs are installed.

IBM Communications Server for NT version 5.0 or higher: runs only on the server computer of
Communications Server's split-stack model.

Microsoft Windows NT SNA Server for x86: runs on either a client or the server computer of
SNA Server's split-stack model. We recommend version 4.0 of SNA Server with its latest
service packs. At a minimum, you need Microsoft SNA Server version 2.11 with Service Pack 2.
Versions prior to 2.11 Service Pack 2 have bugs that you'll encounter quickly.


for IPX/SPX
SPX software is provided as part of the network support in the Windows NT operating system.
Microsoft improved their SPX support with SPX II. SPX II is also present on Novell NetWare 4.x.
SPX II allows a window size greater than 1 and buffer sizes up to the size the underlying transport
supports.
The SPX protocol supplied by Microsoft in Windows NT 4.0 is subject to slowdowns when running to
itself, that is, with loopback.
for RTP, TCP, or UDP
TCP/IP software is provided as part of the network support in the Windows NT operating system.
Microsoft's Service Pack 3 for Windows NT 4.0 fixes several TCP/IP bugs and is required for IP
Multicast; it is strongly recommended for users of Windows NT 4.0. We recommend always using
the latest service pack. See "Microsoft's WinSock 2 Software" on page 30 in the Configuring Chariot
in Your Network chapter for more information.
Quality of Service (QoS) support for RTP, TCP, and UDP is part of Microsoft Windows 98 and
Windows 2000 (now in beta). At the time of this writing, we are testing with Windows 2000 beta 3.
Its QoS support is much improved over Windows 98; it supports DiffServ as well as RSVP, and has a
number of bug fixes not in Windows 98.
IP Multicast and QoS are not required to operate the Chariot console. However, if you plan to run IP
Multicast or QoS tests using an endpoint installed on the same computer as the console, you must be
running an appropriate TCP/IP stack.

Hardware/Software Requirements for Windows 95/98


Here's what you'll need to run the Chariot console on Microsoft Windows 95 or 98:

An x86 computer capable of running Microsoft's Windows 95 or 98 well. This implies a CPU such
as an Intel 80386, 80486, a member of the Pentium family, or equivalent. A Pentium or better is
recommended.

At least 32 MBytes of random access memory (RAM)


The total RAM requirement depends on the RAM usage of the underlying protocol stack and the
number of concurrent endpoint pairs.

A hard disk with at least 16 MBytes of space available

A CD-ROM drive on the computer on which you are installing the Chariot Console.

A Web browser. Because Chariot online help is now in HTML format, you need a Web browser to
view the help. We recommend version 4.x (or later) of either Netscape Navigator or Microsoft
Internet Explorer.

For best results, we recommend using a color palette of at least 256 colors.
We also recommend that you get up-to-date with the latest Windows 95 or 98 service levels. See the
Troubleshooting chapter for more information on how to get the latest patches and Service Packs.
You also need compatible network protocol software:
for APPC
IBM Personal Communications AS/400 and 3270 for Windows 95 version 4.11 or higher (its short
name is PCOMM for Windows 95) is supported by the Chariot console. It runs on x86 computers
where its communications APIs are installed.


for RTP/TCP/UDP and IPX/SPX


WinSock 2 software provides the latest protocol support needed for IPX, RTP, SPX, TCP,
and UDP. See "Microsoft's WinSock 2 Software" on page 30 in the Configuring Chariot in Your
Network chapter for more information.
A recent version of WinSock 2 is integrated into Windows 98.
If you are using Windows 95, you must install its WinSock 2 package, available separately from
Microsoft. To download the WinSock 2 package, visit the following Microsoft Web site:
http://www.microsoft.com/windows/downloads/contents/updates/w95sockets2.
Quality of Service (QoS) support for RTP, TCP and UDP is part of Microsoft Windows 98 and
Windows 2000 (now in beta). At the time of this writing, we've seen that the QoS support in
Windows 2000 beta 3 is much improved over Windows 98. Check our Web site for the latest
information on Windows 98 service packs; we assume the bug fixes will be migrated from Windows
2000 to Windows 98 shortly.
IP Multicast and QoS are not required to operate the Chariot console. However, if you plan to run IP
Multicast or QoS tests using an endpoint installed on the same computer as the console, we strongly
recommend Windows 98 with the latest fixes, not Windows 95.

Installing Chariot 3.1 Over Chariot 2.2


If Chariot 3.1 is installed on a computer which already has Chariot 2.2 installed, the older Chariot version
is automatically uninstalled by the Chariot installation program.
If you install Chariot 3.1 in a different directory than Chariot 2.2, you may want to change the user
settings for directories to point to the new Chariot 3.1 locations. If you install the new version in the same
directory as your old version, the information in the files that contain your endpoint information
(ENDPOINT.DAT) and your service quality names (SERVQUAL.DAT, which contains APPC mode names
and TCP/IP QoS templates) will be maintained. This will save you from reentering this information. If
you install the new version in a different directory than the current Chariot version, you can copy these
two files into the new directory to maintain that information.
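For example, if your previous console files are in \GANYMEDE\CHARIOT and you installed version 3.1
into a directory named \CHARIOT31 (an illustrative path, not a Chariot default), you could copy the two
files from a command prompt:

COPY \GANYMEDE\CHARIOT\ENDPOINT.DAT \CHARIOT31
COPY \GANYMEDE\CHARIOT\SERVQUAL.DAT \CHARIOT31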

Running Console Setup for Windows 95, 98, and NT


Put the Chariot CD-ROM in your CD-ROM drive. Enter the following at a command prompt:

[drive:]               (change to your CD-ROM drive)
CD \[type]\CHARIOT\
SETUP
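For example, if your CD-ROM drive is E: and the console package is in a CD directory named WIN32
(this directory name is only an illustration; substitute the actual [type] directory on your CD), you
would enter:

E:
CD \WIN32\CHARIOT
SETUP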

The first dialog, after Setup has loaded itself, asks you to enter your name and your company name.
The Select Destination Directory dialog lets you select where to install the console. We recommend
installing it on a local hard disk of the computer you're using. If you install on a LAN drive, the
additional network traffic may influence your test results. The default directory is \GANYMEDE\CHARIOT,
on your boot drive.
The console installation next goes through the following steps:
1. Installing the scripts
2. Installing the console


After the installation is complete, you can choose to install the endpoint. See the Performance Endpoints
manual for more information on installing the endpoints.
The Chariot installation checks to see that the Microsoft C runtime files already on your computer are at a
service level at least as recent as those installed with Chariot. If these files are down-level, Chariot
replaces them with newer copies.
After installing the files, the installation program creates a Chariot folder. Inside that folder are three
icons: Chariot Console, Chariot Help, and Readme. The console can now be started by clicking on the
Chariot Console icon.

For retail versions, you must register the product with the Ganymede Software Registration Center to
receive an authorization key. Contact information for the Ganymede Software Registration Center
can be found on the inside of your Chariot CD-ROM case. You may use the product in evaluation
mode for 15 days while you are requesting your authorization key. (See Changing Your Registration
Number on page 54 in the Operating the Console chapter for information on the console dialog that
lets you make changes after installation.)
Upon contacting the Ganymede Software Registration Center, you will be asked for a registration
number and a license code. The registration number can be found on the Registration Card you
received upon purchase. The license code is provided on the initial screen displayed when starting
Chariot. After providing this information to the Registration Center, you will receive an
authorization key that will enable the retail version of Chariot.

For evaluation versions, leave the Registration number field empty and press the OK button. The
evaluation period for Chariot is 15 days. Each time you start Chariot, the dialog displays the number
of days remaining in the evaluation period. After the evaluation period is over, you will not be able to
start Chariot without entering a registration number and authorization key. If you later purchase
Chariot, obtain a new authorization key, as described above. You do not have to reinstall Chariot
after you purchase the product.
Actions During Windows Installation

Here's what we do during the installation steps. Let's say you install Chariot into the directory named
GANYMEDE\CHARIOT. Chariot installation creates directories with the following structure:

GANYMEDE\CHARIOT
   contains the executable programs, .DLL, and .DAT files; the help file used by the console;
   and the file README.TXT

GANYMEDE\CHARIOT\SCRIPTS
   contains the predefined script files (with file extension .SCR)

GANYMEDE\CHARIOT\TESTS
   an empty directory where you can save your Chariot tests

GANYMEDE\CHARIOT\HELP
   contains all Chariot help files


Removing the Chariot Package (Uninstall)


When you uninstall Chariot from a computer, all Chariot files are removed except files that you created,
or edited and renamed, such as test files or script files.
To remove Chariot:
1. From the Windows Start menu, select the Settings submenu, and the Control Panel menu item.
2. From the Control Panel, double-click on the Add/Remove Programs icon. The Add/Remove
   Programs Properties dialog is shown.
3. Highlight Ganymede Software Chariot and press the Add/Remove button. The Confirm File
   Deletion message box is shown.
4. To remove Chariot, press the Yes button. The Uninstallation program uninstalls Chariot.
5. When the uninstall is complete, press the OK button. The Add/Remove Programs Properties dialog
   is shown. Press the OK button.

All system registry entries are removed when uninstalling. However, user registry entries are not
removed. For example, if you customize information on the user settings notebook and then uninstall
Chariot, your user settings are restored when you reinstall Chariot. To remove the registry entries, open
REGEDIT, highlight the key HKEY_CURRENT_USER\Software\Chariot, and press the DELETE key.
All registry entries will be removed.
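If you prefer not to edit the registry by hand, the same key can be removed by merging a small registry
file with REGEDIT. This is only a sketch, using the key name given above (the file name is hypothetical):

REGEDIT4

[-HKEY_CURRENT_USER\Software\Chariot]

Save these lines in a file such as CLEANCHR.REG and double-click it, or enter REGEDIT CLEANCHR.REG
at a command prompt; the leading minus sign tells REGEDIT to delete the key.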

Removing the Chariot Package Manually


You might run across situations where you need to remove the Chariot Console manually. For example, if
specific files are corrupted, the uninstall program will not function correctly.
To manually remove Chariot:
1. At a command prompt, enter the following commands (a worked example follows these steps):

   CD \[directory that contains the CHARIOT directory]
   DEL /Q /S CHARIOT
   RMDIR /Q /S CHARIOT

2. Open REGEDIT.

3. Highlight the Chariot folder in HKEY_LOCAL_MACHINE\SOFTWARE\GANYMEDE\ and press the
   Delete key.

4. Highlight the Chariot folder in
   HKEY_LOCAL_MACHINE\SOFTWARE\MICROSOFT\WINDOWS\CURRENTVERSION\UNINSTALL and
   press the Delete key.
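As a worked example of step 1, assume Chariot was installed in the default \GANYMEDE\CHARIOT
directory on boot drive C: (adjust the drive and path for your system). Change to the directory that
contains the CHARIOT directory and then remove it:

C:
CD \GANYMEDE
DEL /Q /S CHARIOT
RMDIR /Q /S CHARIOT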


Relinquishing or Transferring Your Chariot License


If you wish to relinquish your Chariot license or transfer your license to another user or computer, you
must first deregister your Chariot license. To deregister the Chariot console on your computer, select the
Deregister button on the Registration tab under Change user settings.
When you deregister the product, you will be provided with a deregistration key, which is uniquely tied to
your registration key and license code. These 3 numbers are saved together in an ASCII text file in your
Chariot directory, named DEREGISTER.DAT, since these numbers are not provided to you again.
To fully relinquish your license and make it available for new users or new computers, you must contact
the Ganymede Software Registration Center and provide them with both your registration number and
deregistration key. To register afresh, repeat the steps described in Running Console Setup for Windows
95, 98, and NT on page 14; you will see that you will be working with a new license code. Contact
information for the Ganymede Software Registration Center can be found on the inside of your Chariot
CD-ROM case.


Configuring Chariot in Your Network


Chariot runs as a set of application programs, using the available application programming interfaces
(APIs) for all of its communications. These are the same interfaces that applications like Web browsers
and FTP use. Chariot does all of the configuration for its own programs dynamically, so that you
shouldn't have to update the configuration files for your communications software.
This version of the Chariot console can run tests between endpoints using the following network protocols
(different endpoint operating systems support different protocols):

APPC
IPX
RTP
SPX
TCP
UDP

You can use a different protocol to get from the console to Endpoint 1 than you use between a pair of
endpoints. Other protocols, to be supported in the future, may be shown when you see the list of available
network protocols in the console listboxes. You can create tests using these protocols, but the tests won't
run unless the network protocols are supported on the respective endpoints.
The Edit Console to Endpoint 1 dialog lets you choose to Use Endpoint 1 to Endpoint 2 values when
connecting from the console to Endpoint 1. (There's a similar checkbox in the User Settings notebook.)
If you're using a datagram protocol and you've checked this box, the console uses the corresponding
connection-oriented protocol for its connection to Endpoint 1. So, if you choose the IPX protocol for the
connection between Endpoint 1 and Endpoint 2, the default is to use SPX between the console and
Endpoint 1. Similarly, if you choose the UDP or RTP protocol for the connection between Endpoint 1 and
Endpoint 2, the default is to use TCP from the console to Endpoint 1.
Thus, here are the protocols supported between the consoles and Endpoint 1:

APPC
SPX
TCP

Chariot's support of datagram protocols, such as IPX and UDP, can emulate the behavior of datagram
applications that must provide reliable delivery of data; RTP, by contrast, can only be used to emulate
multimedia applications that send a stream of data. For datagram protocols, reliable delivery between
endpoints is provided not by the protocol, but by the endpoint itself. See the Working with Datagrams
and Multimedia Support chapter on page 33 for more information.


Chariot, as a set of network applications, expects your network hardware and software to be set up and
running correctly. This chapter guides you through verifying this at the console. Be sure to read the
corresponding chapter for each endpoint you're using in your tests. Here are the key tasks:
1. Determine the network addresses of the computers to be used as the console and the endpoints in
   Chariot tests,
2. Select a service quality (if appropriate), and
3. Check out the network connections between the pairs of computers.

Let's look at each of the protocols to see how to accomplish these tasks:

•  see "APPC Configuration" on page 20
•  see "IPX and SPX Configuration" on page 26
•  see "RTP, TCP, and UDP Configuration" on page 27

APPC Configuration
This section provides selected information about configuring APPC. If you are new to configuring APPC,
start with the guidance provided by the APPC network software you're using. Chariot consoles support:

•  APPC on Windows NT with IBM Communications Server or Personal Communications (the second
   product is also known as PCOMM)

•  APPC on Windows NT with Microsoft's SNA Server

IBM has created a thorough (but aging) guide to setting up APPC across a variety of its platforms. This
guide is called the MultiPlatform Configuration Guide (you may hear it called MPCONFIG), and is
available for download from the Internet and from CompuServe. Here are the file names to look for:
MPCONT.ZIP     MPCONFIG in ASCII text format
MPCONP.ZIP     MPCONFIG in PostScript format
MPCONB.ZIP     MPCONFIG in BookManager format
MPCONF.ZIP     The sample configuration files referenced in the Guide

Here's where you can download these files:

•  from the Internet
   Use FTP to download from the following directory:
   ftp://networking.raleigh.ibm.com/pub/appc_appn/config/
   The files in this directory are stored as compressed data (ZIP files) and must be decompressed using
   an appropriate application (such as WinZIP).

•  from CompuServe
   These files are located in Library 4, Technical Papers of the APPC Forum on CompuServe (GO
   APPC).
A fully-qualified logical unit (LU) name is the easiest network address to use with Chariot. It's called
fully-qualified because it consists of two parts, identifying the network and the computer. It is
constructed by concatenating the SNA network name, a period, and a control point (CP) or LU name.
You need to determine the LU name of each endpoint used in your APPC tests. The ways of finding a
computer's local LU name vary among operating systems.
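For example, if the SNA network name is NETWORK1 and a computer's LU name is ENDPT01 (both
names are hypothetical), the fully-qualified LU name is NETWORK1.ENDPT01.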


APPC on Windows NT with IBM Communications Server or Personal Communications

This section describes the steps for setting up APPC at the Chariot console, for use on Windows NT with
IBM Personal Communications (PCOMM) or IBM Communications Server.

•  Determining the APPC Network Address
•  Selecting a Service Quality (APPC Mode Name)
•  Testing the APPC Connection

Determining the APPC Network Address


To determine the fully-qualified LU name of any computer running PCOMM or Communications Server
for NT, do the following:

•  Start the SNA Node Operations program, either by running PCSNOPS.EXE from a command prompt
   or by clicking on the icon.

•  If the node is not currently started, select Operations...Start Node.

•  The first panel displayed should be the Node panel, which displays a value entitled FQCP Name. If
   this is not visible, select View...Select Resource Attributes and select it for viewing. A default
   fully-qualified LU name is automatically configured, and it has the exact same name as the FQCP Name
   shown in this panel.

A fully-qualified LU name is the easiest network address to use with Chariot. Although you can define
multiple LUs at one computer, the default LU name (determined above) is the one on which the endpoint
listens for a connection from the console.

Automatically Starting APPC


The Communications Server and PCOMM software, as they are installed, do not automatically restart their
SNA stacks after a reboot. Thus, in the default installation, Chariot won't be able to use APPC until you
manually start the stack.
IBM does provide a way to start the stacks automatically. To save you digging through their
documentation, here's how to do it:

•  for Communications Server for Windows NT:
   Either place a shortcut to CSSTART.EXE into the Startup folder or issue csstart -a once from a
   command line to enable autostart.

•  for PCOMM for Windows NT or Windows 95:
   Place a shortcut to autostrt.exe into the Startup folder.

APPC on Windows NT with Microsoft's SNA Server


This section describes the steps for setting up APPC at the Chariot console, for use with Microsoft's
SNA Server for Windows NT.
SNA Server does not support local loopback, which means that both Endpoint 1 and Endpoint 2 (or the
console and one of the endpoints) cannot reside on the same computer.


Determining the APPC Network Address


Microsoft uses a split stack to provide its APPC support in SNA Server; this is different from the
single-stack APPC support provided on IBM systems (like AS/400, MVS, or IBM's PCOMM stacks). The
APPC support in SNA Server is split between two software components: the Server and the Clients.
APPC applications reside on the Client computers and make the usual APPC calls: ALLOCATE,
SEND_DATA, RECEIVE, DEALLOCATE, and so on. However, these calls are transmitted to the SNA
Server computer (the Server) to which they're connected, and executed there. It's the SNA Server
computer that sets up LU-LU sessions to other APPC computers in the network.
[Figure: host systems such as OS/2 and MVS connect to a Windows NT SNA Server computer over
APPC (LU 6.2) sessions; Windows NT Client computers A, B, C, and D connect to the SNA Server
computer over links to Windows NT, such as IP, IPX, or Named Pipes connections.]

On each of the Client computers, you install Microsoft's Windows NT Client for SNA Server. The
endpoint for Windows NT programs can then use the APPC protocol on these computers, in addition to
other network protocol support that may be installed, such as TCP or SPX. The APPC calls at the
endpoints are encapsulated and forwarded to the SNA Server, which executes the calls and returns the
results to the endpoint program at the Client.
Chariot can run APPC tests with these different combinations of connections:
1. Client to Client
2. Client to SNA Server
3. Client to a non-Windows NT computer
4. SNA Server to SNA Server
5. SNA Server to a non-Windows NT computer

Microsoft's SNA Server is more fragile than we would have expected. The last two of these connections
are much more robust than the first three. Don't expect to run a Chariot test from Client to Client with
more than about 3 pairs, or for more than about 30 seconds. (Client connections over TCP/IP are
slightly more robust than those over IPX or Named Pipes.) The test can fail in a variety of ways,
including gradual slowdowns, abends, and lockups. Errors may be logged at the SNA Server. The SNA
Server and/or Clients may need to be restarted, or the computer may need to be rebooted.
All the LUs reside at the SNA Server. The person administering the SNA Server creates a list of LUs
there. An LU list consists of one or more LU definitions, each with a different LU name. Generally, any
of the Clients can use any of the LUs in its list. This means, in a normal setup, you're not sure which LU a
Client's application program is using at any time. This makes it difficult to execute reasonable Chariot
performance tests, where you've provided LU names for the Endpoint 1 and Endpoint 2 addresses.


For example, you might want to create a test that runs from MVS to Client C. When the endpoint
program is started (that is, when Chariot's Windows NT endpoint service starts) on Client C, it begins
making APPC calls to listen for incoming tests. When it makes its first APPC call, its SNA Server
assigns it an LU from the list. Since endpoint programs may be running on multiple Clients, you'd like to
know precisely (and repeatably) which LU is being used by Client C.
Indeed, with this method, Chariot might be connected to one computer during the first part of the
initialization phase but be connected by SNA Server to another computer for the rest of initialization,
leading to serious errors returned by the endpoints.
There appears to be only one method to associate a specific LU from the list of available LUs to a specific
Client computer, letting you create correct and repeatable tests. This involves putting Define TP entries
into the Windows NT registry on each endpoint computer defining the specific LU name that the endpoint
wants to be associated with. This is how Chariot operates.
Chariot makes Define TP entries during installation, when you are prompted for the APPC LU alias.
They can be made at a later time, using the SETALIAS Chariot command-line program.

Defining Pairs of LUs


Since SNA Server uses a pre-APPN (otherwise known as Low-Entry Networking or LEN) implementation
of APPC, all LU-LU pairs that will be used for testing with Chariot need to be explicitly defined in the
SNA Server configuration. That is, for each local LU that was created to correspond to a computer in
your network, it must be partnered with another LU with the configuration program before the two LUs
will be able to communicate with each other. SNA Server differentiates partnering a local LU with a
remote LU from partnering a local LU with another local LU. These cases are discussed separately below.

Partnering Local LUs with Remote LUs


To partner all local LUs with all remote LUs, there are three steps.
1. Set up which modes will be automatically defined on these connections: select any remote partner and
   view its properties, select the Partners... button, select the Modes... button, select which mode
   names you would like to use, and ensure the Enable Automatic Partnering box is X'd for each.
2. Set up which partners you would like to automatically partner: select the remote LUs, view their
   properties, and ensure the Enable Automatic Partnering box is X'd for each.
3. Verify that the Enable Automatic Partnering box is X'd for each local LU.

To verify that the partnering is correct, select the Partners... button when viewing remote LU properties;
a full list of all LUs that are partnered is shown.

Partnering Local LUs with Other Local LUs


SNA Server 3.0 automatically partners all local LUs with each other, so there is no configuration
needed. With SNA Server 2.11, however, there is no automatic partnering feature; each local LU-local
LU combination that you wish to test with must be defined separately. To do this, select a local LU, view
its properties, choose the Partners... button, then choose the Add... button to add every LU-LU-Mode
combination you require for testing. As they are added, they appear in the list for verification.


When local LUs are partnered with remote LUs, local LUs running on SNA Server Client computers can
initiate a connection to a remote computer. However, since SNA Server is not an APPN
implementation, if a remote computer wishes to initiate a connection to one of the LUs defined in SNA
Server, it needs to have configuration information in its own APPC configuration program, defining a
connection to the SNA Server computer for each LU to which it will connect.

Connecting from the Console to Endpoint 1


When the Windows NT console uses APPC (that is, the Edit Console to Endpoint 1 dialog is set to APPC
when adding or editing a pair), the console program requires that a unique APPC LU alias be configured
for the Chariot console's exclusive use. Use SNA Server's Administration program to configure this LU
as you would configure any other local LU in your network. By default, the Chariot installation requires
an LU alias named CHARIOT be set up for the console's exclusive use.
If you have licensed two or more NT Consoles and will use them in the same SNA Server network, each
additional console will need a unique LU for that console's use, and will need to override the installation
default of CHARIOT. This can be done using the Windows NT Registry editor (REGEDT32) to modify
the following entry:

SYSTEM\CurrentControlSet\Services\SnaBase\Parameters\TPs\GANYRPRT\Parameters\LocalLU

This should contain the installed default value CHARIOT and can be changed to another unique LU
alias of your choosing. This is the LU alias the console program uses. The console's LU alias must be
different from the LU alias that the endpoint program uses (even though they may be running on the
same computer).
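If you prefer to script this change, a registry file along the following lines can be merged with REGEDIT.
This is only a sketch: we assume the entry resides under HKEY_LOCAL_MACHINE and that LocalLU is
a string value, and CHARIOT2 is merely an example alias.

REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\SnaBase\Parameters\TPs\GANYRPRT\Parameters]
"LocalLU"="CHARIOT2"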

Selecting a Service Quality (APPC Mode Name)


Most networking protocols have some mechanism to let applications tell the network what kind of service
they require. APPC does this through the mode definition. Several modes come predefined on most APPC
products. These include:
APPC Mode Name    Description

#INTER            Interactive data, high priority, no security
#INTERSC          Interactive data, high priority, secure connections only
#BATCH            Batch data, low priority, no security
#BATCHSC          Batch data, low priority, secure connections only

For many tests, these modes are sufficient. However, if you are trying to emulate a particular APPC
application, you should select the same mode name that it uses.
These predefined modes are defined with session limits of 8. This means that you can only have 8
sessions at a time, between a pair of computers, using the same mode name. If you're attempting to run
more than 8 sessions using the same mode between a pair of computers, we recommend creating a new
mode on both computers, with a session limit larger than 8.


Reaching APPC Session Limits


APPC modes have a limit on the number of LU 6.2 sessions that can use that mode at the same time,
between a pair of computers. When you reach this limit, attempts to start new sessions will fail, with the
sense data value X'FFFE0016'.

Using Secure Modes


If you use modes that are defined as secure, such as #BATCHSC or #INTERSC, you can only get a
connection over links that are defined as secure.
Use secure modes only when you know that your network is configured to provide that level of service.

General APPC Topics


These are topics we've found common to all the APPC platforms.

Testing the APPC Connection


Now that you know the LUs and modes you are using, you can run a quick test using a program named
APING or WINAPING. WINAPING provides the same function as APING, but in a graphical version.
We explain how to use the APING program below, but the same names and concepts apply to
WINAPING.
APING is a test application packaged with most APPC stacks. It is similar to PING in TCP/IP; it is an
echo program that sends a block of data to another computer. That computer receives the data and sends
it back. APING verifies that APPC is correctly installed at a pair of computers, that they are connected to
the network, and that it is possible to get an APPC session using the mode you have selected.
To run APING, enter the following at a command prompt:
APING lu_name -M mode_name -N

Substitute the lu_name and mode_name with the actual names you determined in the steps above. (In
Chariot, the mode name is known as the service quality.) If APING works, it shows a table of timing
information. This endpoint should be ready for APPC testing with Chariot. Continue verifying
connections to the other endpoints in your test.
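As an illustration only, using the hypothetical LU name from earlier in this chapter and the predefined
#INTER mode:

APING NETWORK1.ENDPT01 -M #INTER -N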
Although APING is packaged with most APPC software platforms, only a few automatically configure the
receiving side. If you run APING and get a return code of TP_NOT_AVAILABLE or
TP_NOT_RECOGNIZED, APING is probably not configured on the other computer. The good news is
that you did get a connection and that APPC is set up and running on both computers. Thus, even though
APING was not configured, you are able to connect and converse with the remote computer, so Chariot
should be able to use the remote endpoint for APPC testing.
If you get any other APPC return code, you probably have a configuration problem somewhere. You
should correct this before starting to run Chariot.
Be sure to test the APPC connections between the console and each of the Endpoint 1 computers you're
reaching via APPC. Also test the APPC connections between each Endpoint 1 and Endpoint 2 in pairs
you're testing with APPC. Don't test the connection from the console to Endpoint 2, since Chariot
consoles do not contact Endpoint 2 computers directly.


APPC TP Name
APPC applications use an LU name to decide which computer to connect to in a network. They use a TP
name to decide which application program to connect to within a computer.
Chariot uses the string GANYMEDE.CHARIOT.ENDPOINT as its TP name. This TP name is used
when communicating with endpoints via an APPC connection.

Using APPN in a LAN Environment


If you are using APPN in a LAN environment, make sure you have a connection network defined in your
APPN configurations. This definition should be done at each APPN end node on the LAN. An APPN
connection network lets two computers on the same LAN talk directly to one another. Without the
connection network, all traffic goes through the network node, slowing down APPC communications.

IPX and SPX Configuration


To use the IPX or SPX protocol in Chariot tests, IPX addresses must be supplied as the network address at
the console when adding an endpoint pair. IPX addresses consist of a 4-byte network number (8
hexadecimal digits) followed by a 6-byte node ID (12 hex digits). The network number and node ID are
separated by a colon. The 6-byte node ID (also known as the device number) is usually the same as the
MAC address of the LAN adapter you're using.
If you already know the IP address of a computer (and thus can PING to that computer), it's easy to find
its MAC address. First, PING to the target computer, using its IP address. Then, enter the following
command:
arp -a

A list of recently cached IP addresses is shown, along with their MAC addresses, if they are LAN-attached.
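Sample output might look like the following (the addresses shown are illustrative only):

C:\> arp -a

Interface: 44.44.44.3
  Internet Address      Physical Address      Type
  44.44.44.7            02-07-01-1a-30-82     dynamic

The MAC address in the Physical Address column, with the hyphens removed, is the 6-byte node ID you
need for the IPX address.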
It's tedious to enter IPX addresses when adding new endpoint pairs. When using the IPX or SPX protocol
in your tests, Chariot can maintain an easy-to-remember alias in the Edit Pair dialog. You can set up the
mapping once, and use the alias names ever after. From the Tools menu, select the Edit IPX/SPX
Entries menu item. The underlying file, named SPXDIR.DAT, is like the HOSTS file used in TCP/IP, or
the LU alias definitions offered with APPC.
For Windows 95, 98, and NT, Chariot makes WinSock version 1.1 Sockets-compatible calls when using
the IPX or SPX network protocol. For NetWare, Chariot makes calls to the TLI API when using IPX or
SPX.


Determining your IPX Network Address (Windows NT)


To determine a Windows NT computer's local IPX address, enter the following at a command prompt:
IPXROUTE CONFIG

If your IPX software support is configured correctly, your output looks like the following (this output is
taken from Windows NT 4.0):
NWLink IPX Routing and Source Routing Control Program v2.00
net 1: network number 00000002, frame type 802.2, device AMDPCN1
(0207011a3082)

The 8-digit network number is shown first; here, it's 00000002. The 12-digit node ID is shown in
parentheses at the end; here it's 0207011a3082, which is our Ethernet MAC address. Thus, the IPX
address to be used in tests is 00000002:0207011a3082.

RTP, TCP, and UDP Configuration


The RTP, TCP, and UDP protocols use TCP/IP software for network communications. TCP/IP offers two
forms of network addresses: IP addresses and domain names. An IP address is a 32-bit numeric address.
It is represented in dotted notation as a set of four numbers separated by periods, such as 199.72.46.202.
The alternative, domain names, are in a format that is easier to recognize and remember, such as
www.Ganymede.com. To use domain names, you need either a Domain Name Server (DNS) set up in
your network or an /ETC/HOSTS file on each computer.

Determining your IP Network Address


The method for determining the IP address of a given computer varies among operating systems.

Finding Your IP Address in Windows NT


To determine a Windows NT computer's local IP address, enter the following command:
IPCONFIG

If your TCP/IP stack is configured correctly, your output looks like the following (this output is taken from
Windows NT 4.0):
Windows NT IP Configuration
Ethernet adapter AMDPCN1:
IP Address. . . . . . . . . : 44.44.44.3
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 44.44.44.254

Its local IP address is shown in the first row; here it's 44.44.44.3.


You can also find your IP address using the graphical user interface. Select the Control Panel folder, and
double-click on the Network icon. The installed network components are shown. Double-click on
TCP/IP Protocol in the list to get to the TCP/IP Configuration. Your IP address and subnet mask are
shown.
To determine a Windows NT computer's local host name, enter the following command:
HOSTNAME

The current host name is shown in the first row.


From the graphical user interface, return to the TCP/IP Protocol configuration. Select DNS (Domain
Name System) to see or change your domain name. If the DNS Configuration is empty, avoid using
domain names as network addresses in Chariot; use numeric IP addresses instead.
The default location for the /ETC/HOSTS file on Windows NT is:
d:\path\SYSTEM32\DRIVERS\ETC\HOSTS

where d: and path are the drive and path where you installed Windows NT.

Finding Your IP Address in Windows 95 or 98


The easiest way to find the local IP address on a Windows 95/98 computer is to enter the following at an
MS-DOS command prompt:
WINIPCFG

Users of TCP/IP on other operating systems may be familiar with the NETSTAT command:
NETSTAT -N

This displays a line of text for each active connection. The local IP address is in the second column of
each row.
You can also find and change your IP address using the graphical user interface. From the Start icon,
select Settings. Select the Control Panel folder, and double-click on the Network icon. The installed
network components are displayed.
Double-click on TCP/IP to get to the TCP/IP Properties. Select the IP Address page to see or change
your local IP address. Select the DNS Configuration page to see or change your domain name. If the
DNS Configuration is empty, avoid using domain names as network addresses. Use numeric IP addresses
instead.

Selecting a Service Quality (for RTP, TCP, and UDP)


Chariot can take advantage of the Generic Quality of Service (QoS) support provided by WinSock 2 on
Windows 2000 and Windows 98. You enable QoS by entering the name of a QoS template in the Service
Quality field when creating endpoint pairs using RTP, TCP, and UDP.
A Quality of Service (QoS) template name tells the network what kind of service the connection requires.
The QoS template name refers to a template that is either predefined on the endpoints or created at the
console and distributed to the endpoints when a test is started.


The initialization flows before the endpoints begin executing the test do not use QoS; QoS is not
supported between the console and Endpoint 1. The QoS support begins when Endpoint 1 and Endpoint 2
start executing the script.
If a QoS template is predefined at the endpoints, it is defined there as part of the software known as the
"Windows QoS Service Provider." The following templates are provided with Windows 98 and Windows
2000 beta 3; we recommend using one of them, if it makes sense for the application you are emulating.
QoS Template Name    Description

G711                 pulse code modulation (PCM), a common method of voice encoding
G723.1               dual-rate 6.3/5.3-Kbps voice encoding scheme
G729                 voice encoding scheme which produces high quality at a low data rate
H261CIF              common video codec used with image sizes of 352 x 288 pixels
H263CIF              common video codec used with communications channels that are multiples of 64 Kbps
                     and image sizes of 352 x 288 pixels
H261QCIF             common video codec used with image sizes of 176 x 144 pixels
H263QCIF             common video codec used with communication channels that are multiples of 64 Kbps
                     and image sizes of 176 x 144 pixels

If you need a QoS template different from these, you can create your own at the Chariot console; templates
are distributed to the endpoints as part of the initialization of a test run. See Working with Quality of
Service (QoS) Templates on page 57 in the Operating the Console chapter for information on creating
QoS templates at the Chariot console.
QoS templates are saved in CSV form in the SERVQUAL.DAT file at the console, not in the test file. (The
SERVQUAL.DAT file is located in the directory where the Chariot console is installed). If you want to run
tests at a different Chariot console using a custom QoS template, take a copy of your SERVQUAL.DAT file
and extract the portions you need.
To use Quality of Service today, the RSVP protocol must be enabled on the router interface. For
information on how to enable the RSVP protocol, refer to the documentation for your router.
We have tested QoS with Cisco routers running IOS version 11.3.5. If you're testing QoS with Cisco
routers, we recommend that you use this version or later of the IOS software. If you do not have version
11.3.5, the following bug fixes are necessary for QoS testing.

CSCdk28283: non-SBM aware routers running only RSVP should not process SBM messages.
CSCdk27983: RSVP should not drop messages with 10xxxxxx class objects.
CSCdk27475: RSVP should not drop policy object in RESV message.
CSCdk29610: RSVP should not reject RTEAR message without flowspec.
CSCdk38002: Router crashes while forwarding RSVP Path-Err message.
CSCdk38005: Router does not forward RESV messages with Guaranteed service flowspec. This fixes
the main problem we encountered, which caused tests with Guaranteed service types to not work at
all.

We have found in our testing that if a router in the path between two endpoints rejects the QoS request
after the test has started running, the endpoint is not aware of this and the test will continue to run. The
error messages are returned asynchronously from the router and the traffic will be treated as Best Effort.
This situation can occur if a router in the path does not have the necessary bandwidth to fulfill the request.


Trying Out the TCP/IP Connection


Ping is a simple utility program, included in all TCP/IP implementations. To check the connection from
Windows NT to another computer, enter:
PING xx.xx.xx.xx

Replace the x's with the IP address of the target computer, that is, the computer you're trying to reach.
On Windows NT, if Ping returns a message that says Reply from xx.xx.xx.xx:..., the Ping worked. If it
says Request timed out., the Ping failed, and you may have a configuration problem, a network problem,
or the target computer may not even be powered on.
For more details about the Ping command, enter:
PING -?

If you're unable to reach the target computer using Ping, the TRACERT command may help you
determine how far packets can get through the network. TRACERT tries to find whether each hop in the
IP network can be reached, on the way to the target computer. Be aware that TRACERT's results aren't
necessarily repeatable, since a different route can be taken by each packet that's sent.

Sockets Port Number


TCP/IP applications use their network address (as described above) to decide which computer to connect
to in a network. They use a Sockets port number to decide which application program to connect to
within a computer.
Chariot's sockets port for RTP, TCP, and UDP is 10115. This port number is used during the
initialization of a test; during the actual running of the test, other port numbers are used. If the Chariot
script specifies port_number=AUTO on the CONNECT_ACCEPT command, additional ports are
dynamically acquired from the protocol stack. Otherwise, the endpoint issuing the CONNECT_ACCEPT
commands (usually Endpoint 2) uses the port number specified in the script.

Microsofts WinSock 2 Software


You see the term "WinSock 2" frequently in this documentation. Microsoft's WinSock 2 software is their
latest implementation of the IPX, SPX, TCP, and UDP protocol stacks for their 32-bit operating systems.
We are currently dealing with four different versions of the WinSock 2 software. These four versions
differ in their function and capacity, which can cause some confusion. If you have a choice, always work
from the latest version of WinSock 2 available for your Windows 95, 98, or NT platform.


Here's a brief summary of the four versions in active rotation as of this writing. They are listed in order
from most recent to oldest.
WinSock 2 Version           Description

Windows 2000 beta 3         Supports IP Multicast, and QoS with RSVP and ATM. Supports hundreds of
                            simultaneous connections. This is the latest and best that we've seen.

Windows 98                  Supports IP Multicast, and QoS with RSVP (but no ATM signaling). Supports
                            about 50 simultaneous connections. Much improved over Windows 95, but not as
                            good as Windows 2000 beta 3. Watch for upcoming fixes and improvements.

standalone Windows 95       Supports IP Multicast, but no QoS. Supports about 50 simultaneous connections.
WinSock 2 package           Download this package from the Microsoft Web site, and apply it over Windows 95.

Windows NT 4.0              Supports IP Multicast, but no QoS. Supports hundreds of simultaneous connections.
service pack 3              Very stable, but problems when connecting with stacks on other operating systems.


Working with Datagrams and Multimedia Support

In this chapter we describe datagram and multimedia support. We discuss the concept of datagrams, and
describe Chariot's support for reliable datagram delivery. We next discuss how multimedia applications
are emulated. Chariot uses streaming scripts together with datagram protocols to create data streams not
requiring reliable delivery. The send_data_rate script variable lets you adjust the rate at which the
endpoints send data. Lastly, we discuss IP Multicast, which uses UDP or RTP, streaming scripts, and
Sockets calls to emulate the delivery of data from a sending application to a group of receivers.

Understanding Datagram Support


IPX and UDP are connection-less protocols. RTP is an application-level protocol which is independent of
the underlying transport. Chariot RTP support uses UDP for transport. Endpoints provide datagram
support to allow testing with these protocols. This section discusses the endpoints' datagram support for
reliable delivery. See the following sections for more information:

Network Applications: Connection-oriented vs. Connection-less on page 33


How Endpoints Emulate Reliable Datagram Delivery on page 34
Tuning Your Tests to Emulate Reliable Datagram Applications on page 35

Network Applications: Connection-oriented vs. Connection-less


Programs that communicate over a network follow a protocol for the exchange of data. A connection-
oriented protocol provides reliable delivery of data, but at the cost of relatively expensive (in terms of
time) initialization and termination procedures.
A connection-less, or datagram, protocol provides a best-effort delivery service: the network tries to
deliver application data to the recipient, but if there are problems along the way, the data is lost.
Moreover, the application is not notified of the loss. In spite of the unreliable nature of datagram
protocols, they're frequently used by network applications because they don't incur the overhead
associated with the establishment and takedown of connection-oriented streams. A datagram protocol
works best with applications that use short transactions. For long-running transactions, a stream protocol
is more efficient and can overcome the connection overhead.


Datagrams work like two people exchanging letters via the postal service: there's no guarantee letters
arrive in order, or at all. Without any additional work, this situation is unacceptable for many
applications: those which require reliable delivery of data. If they use datagrams, those types of
applications must follow an approach that ensures the data is properly exchanged. Such an approach
typically requires the use of:

Acknowledgments, to let the sender know the partner has received data.

Timers, so the sender can retransmit its data if it doesn't receive an acknowledgment from the partner
soon enough.

A flow control mechanism, to prevent the sender from flooding its partner with too much data.

Other applications, such as multimedia applications, do not require acknowledgments or timers. They
typically send data in a steady stream at a rate that does not flood the partner. They can usually
accommodate some data that is out of order or lost due to network congestion. Applications such as video
applications, audio applications, and stock ticker applications do not need confirmation that the data has
been received. Chariot's multimedia support lets you emulate these types of applications. See
Understanding Multimedia Support on page 37 for more information.

Understanding Reliable Datagram Support


Reliable datagram support is used by Chariot for all scripts except streaming scripts. If all data sent by
one endpoint is not received by the other endpoint, the test will fail. This support emulates applications
that cannot tolerate loss of data.

How Endpoints Emulate Reliable Datagram Delivery


Endpoints use a straightforward datagram protocol.
1. A windowing scheme is used as the flow control mechanism: a sender sends at most a certain amount
   of data before waiting for an acknowledgment from the receiver.
2. The sender waits for a period of time (the retransmission timeout period) to receive an
   acknowledgment from its partner. If the acknowledgment does not arrive in time, the sender
   retransmits the window of unacknowledged data.
3. The receiver sends an acknowledgment when:
   •  A window is filled.
   •  All of the data is received for an application script RECEIVE command.
   An acknowledgment is not sent immediately after all the data of a RECEIVE is received. Instead, the
   acknowledgment is sent when the next API call is issued (unless the next command is RECEIVE, in
   which case the endpoint keeps receiving until the window is full).
   This is a common way for peer reliable datagram applications to keep traffic to a minimum. It also
   means that the endpoint sends one less datagram when a RECEIVE is followed by a SEND.
4. If the receiver detects lost datagrams, it sends an acknowledgment indicating how much of the
   window was received in sequence, thereby letting the sender retransmit what wasn't received.


When an endpoint sends and receives data as fast as possible, the endpoint's timeout mechanism is
sufficient to detect when datagrams are lost or late. However, if a non-zero SLEEP command is specified
in the application script, the endpoint must send a datagram to the partner endpoint indicating that it will
not be sending or receiving data for the length of time indicated by the SLEEP. The partner endpoint
must acknowledge the condition by sending back its own datagram. These datagrams are counted in the
datagram statistics for the connection.

Tuning Your Tests to Emulate Reliable Datagram Applications


The following sections explain the two ways you can tune your datagram testing for a connection:

Modifying Application Script Parameters on page 35


Modifying the Datagram Parameters for a Connection on page 35

You should use the application script which most closely matches the application you want to emulate.
For example, a typical datagram application is a file server program (such as NFS, which uses UDP, or
NetWare, which uses IPX). These can be emulated with the File Send Long Connection or File Receive
Long Connection script.
The Packet Blaster scripts are not intended to emulate a frame thrower: application scripts require
reliable delivery of data (which implies the use of acknowledgments), whereas frame throwers do not. See
the Messages and Application Scripts manual for information on setting the delivery rate for data.
There are some application scripts which do not run over an IPX network with their default values. These
scripts are intended to emulate applications which send large amounts of data using the TCP protocol.
They are not intended to be used between endpoints in an IPX network, because IPX does not support
fragmentation and reassembly of large amounts of data. Note, however, that the scripts will work if you
change the send buffer_size values in the scripts to prevent fragmentation or if the protocol is SPX.

Modifying Application Script Parameters


Modify the send and receive record and buffer sizes using values of the application being emulated. The
send and receive buffer sizes determine the size of the datagram that's sent over the network. In general,
always set these to the same size.
However, the default value is inefficient for pairs of endpoints which have different default send and
receive buffer sizes. For example, the DEFAULT send buffer size for IPX on OS/2 is less than that for
IPX on NetWare. In this case, it's best to set the send and receive buffer sizes to the number that is the
smaller of the two DEFAULT values.

Modifying the Datagram Parameters for a Connection


There are three parameters which affect how endpoint datagram support behaves. The parameters have
the following defaults:
Window Size: 1500 bytes
Retransmit Timeout: 200 milliseconds
Number of Retransmits before Aborting: 50
Set these parameters to be as similar as possible to the IPX or UDP application you are simulating. If you
want to determine what those values should be, here are some general guidelines to follow. Expect to do
some experimentation to find the best combination of variable settings.


Window Size
The Window Size is the number of bytes that can be sent to the partner without an acknowledgment.
Window Size imposes flow control. Set this to avoid flooding the network or an endpoint with too many
datagrams. Sending more datagrams than the network can handle results in lost datagrams, and lost
datagrams can result in timeouts and retransmissions, which in turn means poor performance. If the
Window Size is too small, datagrams are much less likely to be lost, but performance won't be as good as
it would be otherwise.
The number of datagrams sent in a window can be calculated by taking the Window Size, dividing by the
script's send_buffer_size, and rounding this up (endpoints won't send partially full datagrams until the
end of a SEND command). You can enter values in the range from 1 to 9,999,999 bytes for the Window
Size.
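For example, with the default Window Size of 1500 bytes and a script send_buffer_size of 512 bytes (an
illustrative value), the sender transmits at most

   ceiling(1500 / 512) = 3

datagrams before it requires an acknowledgment from the receiver.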
Setting this field to a small number causes more frequent checking to occur during SEND commands.
This decreases performance by forcing the sender to wait for acknowledgments and by increasing the
number of data packets flowing through the network. For example, if you set the window size to 4,096,
the sender seeks an acknowledgment from the receiver after each block of 4,096 bytes it has sent. You
can set the value larger, resulting in fewer pauses for acknowledgments, but if the acknowledgment
indicates a failure, a larger number of bytes will have to be re-sent.
If the Window Size is large, more datagrams may need to be retransmitted. In addition, memory usage is
a concern when making the Window Size larger. When a test begins, both endpoints in each connection
using a datagram protocol allocate enough memory to hold a complete window.
If the application being emulated supports datagram parameters, use the same values it is using.
Otherwise, choose a Window Size large enough for several datagrams (or more/less, depending on the
expected load on the network and the individual hosts) to be sent before an acknowledgment is required.
One approach is to begin low and gradually increase until either performance no longer improves or
retransmissions increase.

Retransmission Timeout Period


Datagrams can be delayed rather than lost. When this happens, the sending endpoint times out after not
receiving an acknowledgment within the Retransmission Timeout Period. If this timeout parameter is set
too low, an endpoint behaves as if a datagram was lost, when it really just hasn't been acknowledged yet.
The Retransmission Timeout Period field is the number of milliseconds the sender will wait, after sending
for the first time or retransmitting a block to the receiver, to receive an acknowledgment that the block
was received. For example, if you set the retransmission time-out period to 1000 milliseconds and a
retransmission is necessary because something wasn't acknowledged, the sender will resend its original
block and wait 1 second to hear from the receiver whether it was received. You can enter values in the
range from 1 to 99,999 milliseconds (about 100 seconds) for the retransmission time-out period.
If you set this period too small for the network you're using, the sender will resend blocks of data that the
receiver has actually already received; it has just taken a bit longer for the acknowledgment to be returned
to the sender. This causes unnecessary traffic.
If the Retransmission Timeout is set too large, the endpoint will wait too long before deciding to
retransmit a lost packet. This can significantly degrade performance in networks in which packets are lost
frequently.


This parameter should be at least the average round-trip time between the two endpoints, not counting
overhead for processing the datagrams at the endpoints. For example, if the two endpoints are connected
via satellite, use 2 * 270ms plus some overhead depending on the speed of the computers. A value of 10
or 20 milliseconds may be appropriate for two endpoints directly connected on a LAN, such as Ethernet.
Our suggestion is to set the Retransmission Timeout to twice the propagation delay of your network. This
is equivalent to the response time measured by the Inquiry, Long Connection (INQUIRYL) script.
The timeout parameter should be increased to account for the number of pairs expected to use the network
concurrently. A low timeout parameter can lead to many endpoints each retransmitting at the same time,
resulting in yet more congestion of the network.

Number of Retransmits before Aborting


The Number of Retransmits before Aborting field is used to put a cap on the number of retransmits. The
Number of Retransmits before Aborting field is the number of times the sender will resend a block of data
for which an acknowledgment is not received. If you set this number too high, the sender may generate
needless traffic on the network when there's been a real failure. If you set this number too small, the
sender may declare a connection failure and end the test when there's really only a momentary period of
congestion, while the connection is otherwise okay. You can enter values in the range from 1 to 999 for the
number of retransmits before aborting the connection.
To calculate how long the endpoint waits before ending a test, multiply the Number of Retransmits before
Aborting by the Retransmission Timeout Period. On an extremely congested network, an application may
not be able to complete even one transaction because datagrams are lost in the network. In this case, it's
best for the application to abort as an indication that the network is too congested and needs to be fixed.
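For example, with the default values given earlier (a Retransmission Timeout Period of 200 milliseconds
and 50 retransmits before aborting), the endpoint gives up after roughly 50 x 200 = 10,000 milliseconds,
or about 10 seconds, of unacknowledged retransmissions.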

Other Factors Affecting Datagram Performance


The network packet size, sometimes called the frame size or MTU, is the maximum amount of data a
network adapter card can transmit at one time. When the size of a datagram exceeds the packet size, the
network protocol stack must break the datagram into pieces, each no larger than the packet size. This
process is called datagram fragmentation. The process of putting the packets back together, which is done
at the destination computer, is called datagram reassembly.
IPX doesn't support datagram fragmentation and reassembly; each datagram can be no larger than the
largest datagram supported over the entire route between endpoints. That is, if IPX computers on one
LAN segment support 1467 byte datagrams, but only 512-byte datagrams are supported on the next hop,
any datagram over 512 bytes is dropped.
TCP/IP supports fragmentation and reassembly of higher-layer protocols such as UDP and TCP. If the
TCP/IP network protocol stack can reassemble datagram fragments faster than the application software
(such as the endpoints) can issue API send calls, your tests can run faster if you configure
send_buffer_size as large as possible.
On the other hand, a large number of datagram fragments may increase the congestion in a network and,
therefore, the likelihood that one of them may be dropped. When that occurs, the entire datagram must be
retransmitted, causing poorer performance.
To avoid fragmentation, the datagram size must be smaller than the packet size. Endpoint datagram
support uses a 9-byte header. UDP has an 8-byte header; IP has a 20-byte header; IPX has a 30-byte
header. So, for example, if you're using UDP, the datagram size is 9+8+20+send_buffer_size; for IPX,
it's 9+30+send_buffer_size.
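For example, on an Ethernet with a packet size of roughly 1500 bytes, a UDP datagram avoids
fragmentation when 9 + 8 + 20 + send_buffer_size is at most 1500, that is, when send_buffer_size is no
larger than 1463 bytes; the corresponding limit for IPX is 1500 - (9 + 30) = 1461 bytes.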


Your network administrator can tell you the packet size. The packet size is limited by the type of physical
network: Ethernet's packet size is around 1500 bytes, and Token Ring's is approximately 4096 bytes.
The packet size also depends on the network software. For example, some versions of IPX don't support
more than 512 bytes, even on an Ethernet.

Understanding Multimedia Support


The endpoints provide the ability to emulate multimedia applications. This section describes Chariot's
multimedia support, which involves streaming scripts and datagram delivery without acknowledgments.

Delivering Data: Unicast, Broadcast, and Multicast


There are three ways to transmit data over a network:

unicast
broadcast
multicast

In unicast delivery, an application sends data from a single source to a single destination on the network.
An example of unicast delivery is a telephone call (before the invention of three-way calling and
conference calling). This type of communication is between two points and the data (which in this case is
conversation) flows between only these two points.
Broadcast delivery lets you send data to all destinations, regardless of whether or not the receivers want to
receive the data. Radio is an example of broadcast delivery. Radio programs are sent through the air
waves, regardless of whether anyone is listening to the broadcast. The broadcast is accessible by everyone
with a radio set. You do not need to request to receive radio shows.
In multicast delivery, an application sends data to a single address, called a multicast address. The routers
in the network decide where to deliver the data based on whether other applications are listening. A
benefit of multicast is that multiple unicast connections do not have to be set up. Another benefit is that,
unlike broadcasting, the data is only sent to destinations where applications are listening and want to
receive the data. An example of multicast is video and audio conferencing. In this use, members of the
group are subscribed before the conference. During the conference, the video or audio is only received by
the members of the group.

How Endpoints Emulate Multimedia Applications


Most multimedia applications stream data to their receivers, without expecting acknowledgment of
delivery. Chariot multimedia support is based on sending a stream of data between two or more (in the
case of IP Multicast) endpoints without acknowledgments or retransmissions. Data may be sent as fast as
possible or at a defined data rate. This data rate is specified on the send_data_rate parameter of the
SEND verb in the script. You can also vary the file size (that is, the interval between timing records) and
the buffer size of the data to send.
Chariot provides multimedia support for IPX, RTP, and UDP, since these are connection-less protocols.
These protocols support streaming of data. IP Multicast tests can use UDP or RTP. The Chariot
multimedia support for the RTP protocol uses UDP as the transport protocol.
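For illustration only (this is not Chariot code and not how the endpoints are implemented), the following Python sketch shows the general idea of streaming datagrams at a defined data rate without waiting for acknowledgments; the destination address, port, rate, and buffer size are arbitrary assumptions.

    import socket
    import time

    DEST = ("192.0.2.10", 5001)    # assumed receiver address and port
    SEND_BUFFER_SIZE = 1024        # bytes per datagram (assumed)
    SEND_DATA_RATE = 64000         # target rate in bytes per second (assumed)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = SEND_BUFFER_SIZE / SEND_DATA_RATE   # seconds between datagrams
    payload = b"\x00" * SEND_BUFFER_SIZE

    for _ in range(1000):              # stream 1,000 datagrams, then stop
        sock.sendto(payload, DEST)     # no acknowledgment is expected
        time.sleep(interval)           # pace the stream to the defined rate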

Working with Datagrams and Multimedia Support

To emulate a multimedia application, select a streaming script. When you run a streaming script in a test,
data is sent in only one direction. Throughput and lost data are calculated by Endpoint 2. If RTP is used,
Endpoint 2 also calculates jitter in addition to lost data. Because of the nature of unreliable protocols,
lost and out-of-order data can occur; this does not cause multimedia pairs to fail.
Streaming scripts have a fixed format:
There is no acknowledgment from Endpoint 2 that data has been received.
You cannot add commands to these scripts.

However, you can modify variable values, such as the send_data_rate. See the Messages and Application
Scripts manual.
For pairs running streaming scripts, test results contain information about the data quality. The Lost
Data tab in the Test window lets you see how much data was lost during the run. If you are using RTP,
the Jitter tab in the Test window shows jitter data. Graphs of lost data are associated with these tabs,
so you can graphically view how much data was lost and when. You can use the Datagram tab to view
the number of lost or out-of-order datagrams.

Modifying the Multimedia Run Options for the Test


There are two Run Options that affect how multimedia support behaves at endpoints. These parameters
apply to all multimedia pairs in a test. The Run Options are:
Receive Timeout (in milliseconds)
IP Multicast Time To Live
Set these parameters to be as similar as possible to the multicast application you are simulating. If you
want to determine what those values should be, here are some general guidelines:
1. Receive Timeout is the number of milliseconds the endpoint issuing a RECEIVE command waits
before determining that a script has ended. If the data has not been received in this amount of time,
the endpoint sends a notification to the sender that the data was not received.
Receive Timeout is used for both multimedia pairs and multicast groups. This value is configured on
the Datagram tab of the Run Options notebook.
If you set this number too low and the transmission encounters normal network delays, the receiver
may time out while the data is still transmitting. If you set this number too high, the receiver may
spend unnecessary time waiting for a transmission that has failed.
If Endpoint 2 is using Windows 95, 98, or NT, the minimum receive timeout value is 500
milliseconds.
2. IP Multicast Time To Live (TTL) controls the forwarding of IP Multicast packets. (See "IP
Multicast" on page 42 for more information.) Set this TTL value based on how far you want the data
forwarded; expect to do some experimentation to find the best combination of variable settings.
This field defaults to 1. A value of 1 means that the packet does not leave the sender's local subnet.
If you want the packet to cross a router, you must set the value of this field to at least 2. A
general rule is to set the TTL to one more than the number of router hops to the farthest endpoint in
the multicast group (see the sketch at the end of this section).

We found that with Microsoft's TCP/IP stacks on Windows 95, 98, or NT, a TTL value of 0 still lets
multicast packets leave the local host. Thus, if you run a loopback test with one computer, you may
impact the performance of your network, as the packets are broadcast on the local subnet.
See "Changing the Run Options" on page 78 in the Operating the Console chapter for details on these two
fields.
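For reference, here is how a typical multicast sender (not Chariot itself) sets the IP Multicast TTL on a UDP socket; this Python sketch assumes one router hop between the sender and the farthest receiver, so the TTL is set to 2.

    import socket
    import struct

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    # TTL = router hops to the farthest receiver, plus one (assumed to be 1 hop here).
    ttl = 2
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, struct.pack("b", ttl))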

Other Factors Affecting Multimedia Performance


The network packet size, sometimes called frame size, is the maximum number of bytes a network adapter
card can transmit at one time. When the size of a datagram exceeds the network packet size, the protocol
stack must break the datagram into pieces, each no larger than the packet size. This process is called
datagram fragmentation. The process of putting the fragments back together, which is done at the
destination computer, is called datagram reassembly.
We have seen cases where some platforms do not reassemble fragmented datagrams correctly. In these
cases, setting the send_buffer_size to less than the frame_size usually resolves the problem.
Typical multimedia applications use various packet sizes. When emulating such an application, you
should account for any header that may be included in the size of the packet. Endpoint multimedia
support uses a 9-byte header for UDP and a 12-byte header for RTP. In addition to these headers, the
protocol stack adds 8 bytes for the UDP protocol and 20 bytes for IP.
For example, if you are using UDP and the desired packet size is 512 bytes, the send buffer size should be
475 (512 - 9 - 8 - 20 = 475). If you are using RTP and the desired packet size is 512 bytes, the send buffer
size should be 472 (512 - 12 - 8 - 20 = 472).
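Restating that arithmetic as a small Python sketch (illustrative only, using the header sizes given above):

    def send_buffer_size(desired_packet_size, protocol="UDP"):
        """Subtract the endpoint, transport, and IP headers from the desired packet size."""
        if protocol == "RTP":
            return desired_packet_size - 12 - 8 - 20   # RTP + UDP + IP headers
        return desired_packet_size - 9 - 8 - 20        # endpoint multimedia + UDP + IP headers

    print(send_buffer_size(512, "UDP"))   # 475
    print(send_buffer_size(512, "RTP"))   # 472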

RTP Configuration
You can support the Real-time Transport Protocol (RTP) for pairs or multicast groups using streaming
scripts. Many of the leading voice and video applications are using RTP as their framework for
communications. RTP is an Internet standard, documented in RFC 1889. You can use RTP in both
unicast and multicast pairs.
An RTP packet is a UDP datagram with an additional 12-byte header. The 12-byte RTP header contains fields such
as payload type, sequence number, and timestamp.
RTP does not provide any mechanism to ensure timely delivery or provide QoS guarantees, but relies on
lower-layer services to provide these. RTP does not guarantee delivery or prevent out-of-order delivery. It
also does not assume that the underlying network is reliable and delivers packets in sequence. The
sequence numbers included in RTP allow the receiver to reconstruct the sender's packet sequence.
Sequence numbers might also be used to determine the proper location of a packet, for example in video
decoding, without necessarily decoding packets in sequence.
In addition, RTP is a key component of the H.323 specification. H.323 is a standard for audio, video, and
data across IP networks, such as the Internet. See the Primer on H.323 Series Standard on
http://www.databeam.com/h323/h323primer.htm for more information on H.323.
A benefit of using RTP is that you can perform tests to determine the impact of the RTP header compression
performed by many routers. With an RTP header of 12 bytes, a UDP header of 8 bytes, and an IP header of 20
bytes, the combined header size is usually 40 bytes, which is roughly equal to the size of the average audio payload.

When you compress the header information, the total header size is reduced to 3 or 4 bytes. To enable the
RTP header compression option on your routers, refer to their documentation.
RTP flows use even port numbers. The recommended value for the port number is between 16384 and
65535. If you select the AUTO value for the port_number variable, Chariot uses an even port number in
this range.

Understanding Jitter Measurements


When a packet is sent, the sender creates a timestamp in the packet. When the packet is received, the
receiver creates another timestamp. These two timestamps are used to calculate the transit time for the
packet, which is the amount of time it took for the packet to get from the sender to the receiver. If the
transit times for packets within the same test are different, the test contains jitter. Jitter is an estimate of
the statistical variance of the transit times.
The amount of jitter produced in a test depends on the degree of difference between the transit times for
the packets. If the transit time for all packets is the same (no matter how long it took for the packets to
arrive), the test contains no jitter. If the transit times differ slightly, the test contains some jitter. If the
transit times vary widely, the test contains a lot of jitter.
You can view jitter statistics for pairs which use the RTP protocol. Jitter lets you see a short-term measure
of network congestion. It also can show the effects of the queuing elements within the network. The jitter
value is reset for each timing record, so the jitter statistic for a specific timing record shows the jitter for
that timing record only.
The interarrival jitter (J) is defined as the mean deviation (smoothed absolute value) of the difference (D)
in packet spacing at the receiver compared to the sender, for a pair of packets. As shown below, this is
equivalent to the difference in the relative transit times for the two packets; the relative transit time is the
difference between a packet's RTP timestamp and the receiver's clock at the time of arrival, measured in
the same units. If Si is the RTP timestamp from packet i, and Ri is the time of arrival in RTP timestamp
units for packet i, then for two packets i and j, D may be expressed as:
D(i,j) = (Rj - Ri) - (Sj - Si) = (Rj - Sj) - (Ri - Si)

The interarrival jitter is calculated continuously as each data packet i is received from the
source, according to the formula defined in RFC 1889:
J = J + (|D(i-1,i)| - J)/16

Jitter is measured in timestamp units and is expressed as an unsigned integer. Whenever the endpoint
creates a timing record, the current value of J is sampled. This algorithm is the optimal first-order
estimator and the gain parameter 1/16 gives a good noise reduction ratio while maintaining a reasonable
rate of convergence.
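As an illustration of the RFC 1889 calculation (not Chariot's internal code), the following Python function computes interarrival jitter from sender and receiver timestamps, both expressed in the same timestamp units; the sample values are made up.

    def interarrival_jitter(send_times, recv_times):
        """Apply J = J + (|D(i-1,i)| - J)/16 over a packet sequence."""
        jitter = 0.0
        for i in range(1, len(send_times)):
            # D(i-1,i) = (Ri - Ri-1) - (Si - Si-1)
            d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
            jitter += (abs(d) - jitter) / 16.0
        return jitter

    # Constant transit time produces no jitter; varying transit time produces some.
    print(interarrival_jitter([0, 10, 20, 30], [5, 15, 25, 35]))   # 0.0
    print(interarrival_jitter([0, 10, 20, 30], [5, 17, 25, 38]))   # about 0.41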
Almost all data transfers have jitter. However, the amount of jitter and its relationship to the throughput
indicate whether the jitter is causing a problem on the network. Jitter typically follows one of two patterns.
In one pattern, the delay time for each packet steadily increases; in this case, the jitter values increase and
the throughput decreases. In the other pattern, the jitter increases but the throughput remains constant;
here the delay variation varies widely, which can cause problems for some delay-sensitive applications even
though there is no noticeable decrease in throughput.
Various elements in the network can cause jitter. When troubleshooting jitter, first run a test to
benchmark the amount of jitter received when just using the TCP/IP stack. Then run a test and add
network elements such as a router to determine which element is causing the jitter.

Another cause of jitter is router queuing algorithms; the combination of the queuing algorithm in the
router and the network configuration can produce jitter. To troubleshoot, try running tests using
different queuing algorithms on the router.

IP Multicast
Chariot supports testing of IP Multicast. In IP Multicast delivery, an application sends data to a single
address, called a multicast group address. The routers in the network decide where to deliver the data,
based on whether other downstream applications are listening. The benefit of IP Multicast is that it avoids
multiple unicast connections to deliver data to multiple receivers. Unlike broadcasting, the data is only
sent to destinations where applications are listening and want to receive the data.
IP Multicast uses UDP to deliver data from one sender to multiple receivers. IP Multicast testing requires
Network Performance Endpoints at version 3.1 (or later). Any computer designated as Endpoint 1 can
send data to a group of multiple Endpoint 2 computers with a single UDP or RTP data stream. The
sender does not guarantee delivery of the data to the receivers.
Most multimedia applications stream data to their receivers, without expecting acknowledgment of
delivery. Chariot multimedia support is based on sending a stream of data between two or more
endpoints, without acknowledgments or retransmissions. Data may be sent as fast as possible or at a
controlled data rate. This data rate is controlled using the send_data_rate parameter of the SEND
command in the application script. You can also vary the file size (that is, the interval between timing
records) and the buffer size of the data to send.
In an IP Multicast group, receivers must subscribe to the multicast group prior to receiving data. The
multicast group is identified with the IP Multicast address and port. The IP Multicast address specifies
the multicast group to which data should be delivered. This class D IP address falls in a specified range.
The IP Multicast port identifies one of the possible destinations within a given host computer. Chariot
uses the combination of the multicast address and multicast port to uniquely identify a multicast group.
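For illustration, this Python sketch (not Chariot code) shows how a receiver typically subscribes to a multicast group before it can receive any data; the group address and port are arbitrary assumptions.

    import socket
    import struct

    GROUP = "225.0.0.1"   # assumed class D multicast group address
    PORT = 16400          # assumed multicast port

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))

    # Join the multicast group; routers forward the data only where someone has joined.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

    data, sender = sock.recvfrom(2048)   # blocks until a datagram arrives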
In IP Multicast delivery, you can constrain how far an IP Multicast packet is forwarded. Routers use the
Time To Live (TTL) value to determine when to stop forwarding packets. TTL is a router hop count, not
a time duration. Set the TTL so the sending endpoint can reach all the receiving endpoints in the group.
This should be the value used by the sender in the multicast application you are emulating.
To emulate a multimedia application, select one of the multimedia scripts in Chariot. When you select a
multimedia script, data is sent in only one direction. Throughput and lost data are calculated by Endpoint
2. Because of the nature of an unreliable protocol, lost and out-of-order data can occur. This does not
cause multimedia pairs to fail.
For pairs running multimedia scripts, the Chariot test results contain information about the data quality.
The Lost Data tab in the Test window lets you see how much data was lost during the run. Graphs of lost
data are associated with this tab, so you can graphically view how much data was lost and when. You
can use the Datagram tab to view the number of lost or out-of-order datagrams.

Emulating IP Multicast Applications


Endpoints use the IP Multicast address and port to send data to all members in a multicast group. In tests
containing multicast groups, Endpoint 1 acts as the sender and the computers in the role of Endpoint 2 act
as the receivers. You can set the TTL used for the IP Multicast packets. You can also change the default
timeout used by the Endpoint 2 computers. This timeout simulates the buffering done by receivers of
multicast data; after some amount of time, they have to decide that the sender is no longer sending.
To emulate an IP Multicast application, first set up a multicast group at the Chariot console. Select the
Add Multicast Group menu item from the Edit menu in the Test window. Within a test, you can
configure multiple groups to emulate different applications, sending data to multiple sets of addresses.
The IP Multicast address and port combination must be unique for each multicast group in a test.
Valid IP Multicast addresses are 224.0.0.0 through 239.255.255.255 (these are the class D IP addresses).
Chariot does not allow a multicast group address that is not in this range.
Some class D addresses are reserved and should be avoided. For example, addresses between 224.0.0.0
and 224.0.0.255 are reserved for routing protocols and other low-level topology discovery or maintenance
protocols. Chariot lets you specify any IP Multicast address. However, when testing with a reserved IP
Multicast address, be aware that other applications, hosts, or routers may be transmitting data to this
address. Unexpected test results may occur. We recommend using addresses beginning with 225.0.0.0 or
higher.
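A quick way to check whether an address falls in the valid (and recommended) range is sketched below in Python; the sample addresses are arbitrary.

    import ipaddress

    def check_multicast_address(addr):
        ip = ipaddress.IPv4Address(addr)
        if not ipaddress.IPv4Address("224.0.0.0") <= ip <= ipaddress.IPv4Address("239.255.255.255"):
            return "not a class D address"
        if ip <= ipaddress.IPv4Address("224.0.0.255"):
            return "reserved for routing protocols; avoid"
        if ip < ipaddress.IPv4Address("225.0.0.0"):
            return "valid, but 225.0.0.0 or higher is recommended"
        return "ok"

    print(check_multicast_address("225.0.0.1"))   # ok
    print(check_multicast_address("224.0.0.5"))   # reserved for routing protocols; avoid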
Because UDP and RTP are unreliable protocols, some data may be lost. At the end of a test, the Endpoint
2 computers report the number of bytes and datagrams lost during the script. The pair does not fail while
running due to lost data.
Here's a simple example of IP Multicast testing, with one multicast group consisting of three computers
(each labeled Endpoint 2, below).

[Figure: An IP Multicast test with one multicast group; the console and Endpoint 1 send a single data stream to three Endpoint 2 computers.]
The key flows in the above picture are numbered and described below.
1. A test is created at the Chariot console, and the user presses the Run button. The console sends the
setup information to Endpoint 1, using a TCP connection. The setup includes the following:
the application script (a streaming script),
the specific IP address of each Endpoint 2 in the multicast group,
the protocol to use when connecting to Endpoint 2 (UDP or RTP),
the Quality of Service (QoS) template to use (if any),
the multicast group address (a class D IP address) that each Endpoint 2 should use while the test is running,
how long to run the test, and
how to report results.
2. Endpoint 1 keeps its half of the application script, and forwards the other half to each Endpoint 2 in
the multicast group, using a TCP connection. It also sends them the multicast group address on
which they should receive while the test is running.
When all Endpoint 2 computers have acknowledged they are ready, Endpoint 1 replies to the console
on its TCP connection. When all endpoint pairs are ready, the console directs them all to start.
3. Endpoint 1 executes its multimedia script as the UDP or RTP sender, with the Endpoint 2 computers
receiving the data. Endpoint 1 collects timing records; each Endpoint 2 collects information on lost
data.
4. Endpoint 2 computers return information on lost data to Endpoint 1. Endpoint 1 returns this
information and timing records to the console, which displays the results.
You should use a script which most closely matches the application you want to emulate. With IP
Multicast, you must use a streaming script. See the Messages and Application Scripts manual for more
information on the streaming scripts. If the streaming scripts provided do not meet your needs, you can
create a new streaming script. Select the Script Editor menu item from the Tools menu on the Main
window and then select the New menu item from the File menu. From the New Script dialog, select the
Streaming script template for the new script. See the Operating the Console chapter on page 47 for more
information on creating a new script and modifying script parameters.
Avoid trying to use a single Endpoint 2 in multiple places in a single multicast group. This includes
multiple IP addresses or multiple domain names that actually represent the same computer. The endpoint
attempts to bind to the same port, which will result in a Communications Error during the test setup.
Due to the nature of IP Multicast, data packets from a test containing multicast groups could disrupt other
applications running on the network. The test may time out in some cases or stop before completion in
other cases. See "Adding or Editing a Multicast Group" on page 71 in the Operating the Console chapter
for more information on setting up a multicast group.
If you want the endpoints to verify that the data received matches the data sent, select the Validate data
upon receipt checkbox on the Run options tab of the Run options dialog. See Changing the Run
Options on page 78 in the Operating the Console chapter.
For extensive information on IP Multicast, visit the web site at http://www.ganymede.com/. Click on the
Links button, then choose the Multimedia/VoIP topic.

Setting Up Your Hardware and Software For IP Multicast


Before you run a test containing multicast groups, you must do the following to prepare your hardware
and software for IP Multicast.

Verify That Your TCP/IP Protocol Stack Supports IP Multicast


Endpoints provide IP Multicast support on platforms with the necessary protocol stack support. To run IP
Multicast tests, you need Network Performance Endpoints at version 3.1 or later, running on one of the following
operating systems (or later versions):

Digital UNIX v4.0B


HP-UX v10.10
IBM AIX v4.1
IBM OS/2 Warp 4 with TCP/IP v4.1
Linux kernel v2.0.32
Microsoft Windows 3.x with Chameleon v7.0
Microsoft Windows 98 (or Windows 95 with WinSock 2)
Microsoft Windows NT v4.0 or 2000 (x86 and Alpha)
Novell NetWare 4.x
SCO UnixWare v7.0
SGI IRIX v6.2 with patches
Sun Solaris v2.4 (x86 and SPARC)

Configure Your Router to Enable IP Multicast Support


Many of today's routers have IP Multicast support built in. However, this support is not automatically
enabled. To run tests containing multicast groups across a router, you must first configure the IP
Multicast support, which enables IP Multicast data to be forwarded by your router. (You do not
need to perform this step if you are running a test containing multicast groups without a router.)
For information on how to enable and configure your router's IP Multicast support, refer to the router's
documentation.

Verify the Router Operating System is IP Multicast Enabled


Each router has an embedded operating system. Verify that this operating system is enabled for IP
Multicast support. See the documentation for your router for more information. The router should be
updated with the latest ROM, EEPROM, BIOS, or microcode revision level.

Verify Your Routers Have Enough RAM


Providing support for IP Multicast routing increases the amount of RAM required by a router. Routers
maintain additional routing tables (discussed below) to decide how to forward IP Multicast packets. See
the documentation for your routers to determine the amount of RAM required for IP Multicast
applications. If necessary, add additional RAM before running tests containing multicast groups.

Make Decisions on Routing Table Algorithms


There are three families of IP Multicast routing algorithms: DVMRP, M-OSPF, and PIM. Your network
administrator must decide which routing algorithms to implement for the routers in your network. This is
important so that your routers can communicate IP Multicast routing information with each other.
DVMRP
The Distance-Vector Multicast Routing Protocol (DVMRP) forwards packets based on the location of
the source subnetwork. This is the only algorithm that has its own unicast routing protocol.
M-OSPF
The Multicast Extensions to Open Shortest Path First (MOSPF) algorithm uses intra-area routing inside an
OSPF area. This algorithm uses source-based trees.
PIM
There are two Protocol-Independent Multicast routing algorithms: PIM-DM and PIM-SM.
Protocol-Independent Multicast Dense Mode (PIM-DM) communicates IP Multicast information to
groups that are located in a small geographic area.
Protocol-Independent Multicast Sparse Mode (PIM-SM) communicates IP Multicast information to
sparsely distributed groups. One of the goals of this algorithm is to limit traffic by only providing
data to routers interested in the information.

Operating the Console


The console contains three windows: the Main window, Test windows, and the Comparison window. Each
window has its own menu bar and set of functions that make sense for that window.
This chapter is divided into six sections:
The Main Window
The Test Window
The Comparison Window
Working with the Error Log Viewer
Working with the Script Editor
File Types and How They are Handled

The sections on the Main window, the Test windows, and the Comparison window are broken into subsections
corresponding to their menu items. These, in turn, are divided into discussions of tabs and dialogs. This
hierarchical breakdown makes this chapter a little hard to read from front to back; use the Table of Contents,
the Index, and online searching to find specific topics.

Creating and Running Tests: An Overview


Creating and running a network performance test consists of these steps:
1. Creating a Chariot test file
Enter the network addresses of the endpoint pairs for the test, along with the protocols and scripts to execute between each pair.
2. Running a test file
When running a test, the console connects to each of the Endpoint 1 computers, and directs them, in turn, to connect to each of their Endpoint 2 partners. Each pair then executes its respective script, and Endpoint 1 of each pair reports its results back to the console.
Use the Chariot console to run a test and save its results, or use the RUNTST command (see "RUNTST Running Tests" on page 111 in the Using the Command-line Programs chapter for more information).
3. Viewing the results
You can choose to see the results accumulate as the test is running (real-time), or you can wait and see the results after the test completes (batch). You can save a test and its results, and later go back and view them.
Use the Chariot console to view the results of a test, or use the FMTTST command (see "FMTTST Formatting Test Results" on page 112 in the Using the Command-line Programs chapter for more information).

Start the Chariot console program either by double-clicking on the Chariot Console icon in the Chariot folder,
or by entering the following at a command prompt:
d:\path\CHARIOT [test_filespec]

where d: and path are the drive and path where you installed the Chariot console. You can optionally enter the
filespec for an existing Chariot test file; Chariot loads that file as it starts itself.
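For example, if you installed the console in C:\Chariot and want it to load a previously saved test file named filexfer.tst (both the directory and the file name here are only examples), you could enter:
C:\Chariot\CHARIOT filexfer.tst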

Let's look at each of the windows, and how they help you with a test.

The Main Window


The Main window is the first window you see, with the Chariot logo. You can create or open as many tests as
you'd like, by selecting the File menu and choosing New or Open. Or, just press the New or Open icons.
For more information on using Chariot, see the following:

Welcome
Introducing Chariot
Operating the Console
Tips for Testing
Messages manual
Application Scripts manual
Performance Endpoints manual

In the Main window you can choose the Options Menu to change the defaults used by all the tests. It has two
menu items:

Change user settings shows a notebook where you can change the defaults used in the dialogs for creating
and running a test.
Change display fonts lets you change the font used in the Test window.

Choose the Tools menu to customize Chariot. It has six menu items:

Compare Tests lets you compare the results of multiple Chariot tests
Edit Scripts lets you modify existing scripts or create a new script
Edit Output Templates lets you save print options in a template
Edit IPX/SPX Entries lets you save IPX/SPX Entries in a template
Edit QoS Templates lets you work with Quality of Service Templates
View Error Log shows you the error log for the endpoint or the Chariot console

Finally, use the Help menu to access Chariot online help.

The File Menu (Main Window)


In the Main window, go to the File menu and select New or Open. New lets you create a test from scratch,
starting with the information for the first endpoint pair in the test. If you'd rather start from an existing test
file, select Open.
The File menu also keeps a list of recently opened and saved Chariot test files. Up to 9 filespecs are
remembered in this recently-used file list; the most recent is numbered 1 while the ninth is numbered 9.
You can have many Test windows active at the same time. However, from among all your open Test windows,
only one test can be running at a time.

The Options Menu (Main Window)


The Options menu lets you tailor Chariot and change values that affect all your tests. Its first menu item is
Change user settings; when you select this, you get a notebook showing the pages of settings you can
change.
On each notebook page, there is an Undo button. Pressing the Undo button causes Chariot to clear any
changes you have made since you accessed the notebook page. All fields are returned to the values that were
selected when you accessed the notebook page. This button does not undo selections on other notebook pages
in the Change User Settings notebook.
The OK and Cancel buttons at the bottom of the notebook let you confirm or discard your changes. Pressing Cancel causes
Chariot to close the notebook and discard the changes you've made on all notebook pages.
Change font settings lets you change the fonts used in the Test window.

Changing the Endpoint Pairs Defaults


You may have one type of network protocol and script that you use most frequently between endpoint pairs.
Similarly, you may frequently use a different network protocol to connect from the console to the Endpoint 1
computers.
The Endpoint Pair Defaults notebook page lets you set the default network protocol, service quality, and
script. This makes it easier to add or change endpoint pairs. Whenever you add a new endpoint pair, it is
these values that are initially selected in the Network Protocol and Service quality pulldown fields. The
Default script box lets you choose which script is initially selected when you add an endpoint pair. Press the
Clear default script button if you don't want a default script.
See Adding or Editing an Endpoint Pair on page 70 in The Test Window section for more information about
these fields and how they affect a test.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.

Changing Default Directories


Three directories affect the files you deal with when running Chariot.
Where to read and write test files
Each time you start the Chariot console and open a test file, the drive, path, and directory you've entered
in this field is used to prime the Open a test file dialog.
Where to read script files
This is where script files are loaded from, by default when you choose to Open a script file while editing
an endpoint pair.
Where to write console error logs
When an error occurs during a run or while cloning a test, an entry is written to the error log file. At the
console, this is file CHARIOT.LOG, CLONETST.LOG, or RUNTST.LOG, depending on which program you were
running when the error occurred. The error log file is written to the drive, path, and directory listed in this
field. Also, see the Working with the Error Log Viewer section on page 96 for more information.
When you make changes to any of these directories, Chariot looks to see if the directory exists. If it doesn't
exist, you are asked if it is okay to remember what you've entered. You might choose to have any of these
fields point to a directory on a LAN drive, for example, but not be attached to that drive when you're
updating these fields. Answer OK to the question; Chariot uses what you've entered, even if it can't find the
directory at the time. You should ensure that the drive is attached and the directory exists before running a
test.
The File menu on the Main window can keep a list of recently opened and saved Chariot test files. Up to 9
filespecs are remembered in this recently-used file list; you can set this value from 0 to 9, depending on how
large you want your pulldown menu to become.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.

Changing the Default Run Options


Chariot gives you three ways to decide how a run completes, and two ways for the endpoints to report their
results. You can choose to have the endpoints polled at a regular interval, and set this interval (see "Polling the
Endpoints" on page 82 to decide whether you need to poll).
Each of these options affects what is measured in a test, and thus your performance numbers. This notebook
page lets you change the default values, used when you create a new test.
See Changing the Run Options on page 78 for information on changing run options.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.

Changing Your Datagram Parameters


This notebook page shows five fields used to tailor the behavior of Chariot's datagram processing. The
IPX, RTP, and UDP protocols are connection-less (see Understanding Datagram Support on page 33 in the
Configuring Chariot in Your Network chapter for more information), so Chariot provides processing for the
retransmission of lost frames. Chariot does not retransmit in streaming cases.
The first three options on this notebook page do not apply to streaming scripts.

The Window Size is the number of bytes that can be sent to the partner without an acknowledgment. The
number of datagrams sent in a window can be calculated by taking the Window Size, dividing by the
script's send_buffer_size, and rounding up (Chariot won't send partially full datagrams until the end
of a SEND command); see the example following this list.

The Retransmission Timeout Period field is the number of milliseconds the sender will wait, after
sending for the first time or retransmitting a block to the receiver, to receive an acknowledgment that the
block was received.

The Number of Retransmits before Aborting field is the number of times the sender will resend a block
of data for which an acknowledgment is not received.
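For example (an illustration only, with assumed values), the datagrams-per-window calculation is just a rounded-up division:

    import math

    window_size = 8192        # bytes that may be outstanding without an acknowledgment (assumed)
    send_buffer_size = 1460   # from the script (assumed)

    print(math.ceil(window_size / send_buffer_size))   # 6 datagrams per window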

The following two options on this notebook page only apply to streaming scripts. See Modifying the
Multimedia Run Options for the Test on page 39 in the Working with Datagrams and Multimedia Support chapter for
more information on these options.

The Receive Timeout field is the number of milliseconds the receiver waits before determining that the
streaming script has ended.
If Endpoint 2 is using Windows 95, 98, or NT, the minimum receive timeout value is 500 milliseconds.

The Multimedia Time To Live (TTL) field controls the forwarding of IP Multicast packets. Set the Time
To Live value based on how far you want the data forwarded; expect to do some experimentation to find
the best combination of variable settings.
This field defaults to a value of 1. However, a TTL value of 1 does not allow the packet to cross a
router. If you run a test with a TTL that is too small to reach all the routers in the path, the test fails and you receive
CHR0216. You will need to adjust the TTL to run a test with a multicast group that crosses a router.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.

Changing Your Throughput Units


Choose one of six ways to tailor the throughput numbers in your results to units that reflect your test
environment. In reading these values, remember that an uppercase K represents 1,024, while a lowercase
k represents 1,000. Similarly, an uppercase B represents bytes, while a lowercase b represents bits.
KBps    1,024 Bytes per second (the default)
kBps    1,000 Bytes per second
Kbps    1,024 bits per second (that is, 128 Bytes per second)
kbps    1,000 bits per second (that is, 125 Bytes per second)
Mbps    1,000,000 bits per second (that is, 125,000 Bytes per second)
Gbps    1,000,000,000 bits per second (that is, 125,000,000 Bytes per second)
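As a concrete illustration of how these units differ, the following Python lines convert a single assumed measurement of 1,000,000 Bytes per second into each unit:

    bytes_per_second = 1000000   # assumed measurement

    print(bytes_per_second / 1024, "KBps")             # 976.5625 KBps
    print(bytes_per_second / 1000, "kBps")             # 1000.0 kBps
    print(bytes_per_second * 8 / 1024, "Kbps")         # 7812.5 Kbps
    print(bytes_per_second * 8 / 1000, "kbps")         # 8000.0 kbps
    print(bytes_per_second * 8 / 1000000, "Mbps")      # 8.0 Mbps
    print(bytes_per_second * 8 / 1000000000, "Gbps")   # 0.008 Gbps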

We don't advocate changing your throughput units from KBps unless it is really necessary. Differing
throughput units can cause unexpected confusion when comparing results. Be especially careful that you're
using the same units when cutting and pasting exported values from different files.
These units do not affect other numbers, like transaction rate, response time, or relative precision.
On the Throughput notebook page of the Change User Settings notebook, you can press Undo to reset to the
units you had before you made changes.

Changing the Warnings


Some users like to be warned before they perform actions with non-trivial side effects. Other users, once they're
comfortable with the software, don't want the annoyance of warning messages that keep popping up. This
notebook page lets you decide which warnings you see and which you don't.
Decide which warning messages are to be shown at the console.
Stopping a test
Do you want to stop this running test?
Clearing the results of a test
Do you want to change the test setup, which will cause the results to be erased?
Another operation clears the results of a test
Do you want to change the test setup, which will cause the results to be erased?
Deleting pairs from a test
Do you want to delete these endpoint pairs?
Abandoning a run
Do you want to abandon the endpoints, which may still be running and creating timing records?
Printing more than 25 pages
Do you want to print this many pages?

Running a test with more than 10,000 timing records


Do you want to create a huge number of timing records, which can make the console very sluggish (or
even crash the operating system)? This can occur when you have more than 10,000 timing records (or
more than 20 pairs, where at least one pair has reported 500 timing records).
Saving a test or script file in an upgraded version
You opened a file from an earlier version of Chariot. Are you sure you want to save it as a newer version,
which can't be opened by older versions of Chariot?
Saving a test or script file in an older version
You are about to save a file in a format compatible with an older version of Chariot. Are you sure you
aren't losing any capabilities that you may need in future testing?
A Pair's Endpoint 1 address differs from its Console to Endpoint 1 address
The Endpoint 1 address entered differs from the Console to Endpoint 1 address. You may need to change
one of these two values for the test to succeed.

Changing Your Output Defaults


This notebook page lets you select output template and CSV file format export options.
Output templates let you save printing options in a template. You can select the output templates to use as the
default on the Print/Export dialog. This template is shown in the Output Template field on the Print/Export
dialog for each new test.
Select [None] if you want to use the print options stored in the test.
In the Print Configuration default field, select the output template you want to show in the Print dialog when
you select the Print menu item.
In the Export to HTML Configuration default field, select the output template you want to show in the
Export dialog when you select the Export to HTML menu item.
In the Export to Text Configuration default field, select the output template you want to show in the Export
dialog when you select the Export to Text menu item.
Chariot lets you export test information to a spreadsheet output in the CSV file format. See Export Options
for CSV file on page 65 for information on the CSV file format. You can select the settings to use as the
default on the Export to CSV File dialog.
Select which aspects of the test to export by default:
Test summary and run options: provides a summary of any results and your run options
Pair summary: provides pair information contained in the Test Setup and Results tabs of the Test Window
Pair details: provides the timing records for the pairs in your test
You can choose to report on all the pairs (Export all) or you can choose to report on only specific pairs or
groups of pairs (Export marked groups and pairs). This second radio button lets you choose the pairs that
have the mark symbol next to them, in the first column on the left-hand side of the Test Window.

Changing Your Firewall Options


The Firewall Options notebook page lets you specify options for testing through firewalls. See Testing Through
Firewalls on page 138 in the Testing Tips chapter for more information.
The Console through firewall to Endpoint 1 section of the dialog lets you enter port numbers for testing
through a firewall to Endpoint 1. For IPX, RTP, SPX, TCP, and UDP, the Chariot console always uses a fixed
port number when initializing a test. You can choose the port numbers used when:
1. Running a test (for more information, see Running a Test on page 81 in The Test Window section)
2. Returning timing records from Endpoint 1 to the console

We recommend using Auto (letting Chariot dynamically select the port number), unless your testing requires
that timing records use a specific port.
The Endpoint 1 through firewall to Endpoint 2 section of the dialog lets you select firewall options for
testing through firewalls between Endpoint 1 and Endpoint 2. If you are testing through Network Address Translation (NAT)
firewalls, select the Use Endpoint 1 identifier in data option. See Testing Through Firewalls on page 138.
The endpoints will add a 4-byte correlator to the data sent. If you are testing through a firewall that does data
inspection, select the Use Endpoint 1 fixed port option. The endpoints do not send a correlator field in the
data.
Chariot does not currently support firewalls that do both NAT and data inspection.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.

Changing Your Registration Number


You can change your unique Chariot registration number without reinstalling the console. The Registration
tab of the Change User Settings notebook is only shown in the retail version of Chariot.
Enter your registration number and its corresponding authorization key in the fields shown in this dialog.
Your registration number, supplied to you by Ganymede Software, is used to determine the maximum number
of endpoint pairs you can create and run in a test.
The combination of a valid registration number, license code, and authorization key are needed to convert an
evaluation version of Chariot into a retail version. You must register Chariot with the Ganymede Software
Registration Center to receive an authorization key. Contact information for the Ganymede Registration
Center can be found on the inside of your Chariot CD-ROM case. You may use the product in evaluation mode
for 15 days while you are requesting your authorization key.
Upon contacting the Ganymede Software Registration Center, you will be asked for a registration number and a
license code. The registration number can be found on the Registration Card you received upon purchase. If
you are an existing Chariot customer, you can find the registration number on the Registration tab within the
Change User Settings option. The license code is shown on the initial screen that is displayed when starting
Chariot. After providing this information, you will receive an authorization key that will convert your copy of
Chariot into a retail version.
If you change your mind, press Undo to use the registration number you had before you made any changes.
See Running Console Setup for Windows 95, 98, and NT on page 14 and Relinquishing or Transferring
Your Chariot License on page 17 in the Installing Chariot chapter for more information on registering and
deregistering Chariot.

The Tools Menu


The Tools menu provides you with a set of tools that let you customize Chariot.
The Compare Tests menu item accesses the Comparison window which lets you compare the results of
multiple Chariot tests. See The Comparison Window on page 92 for more information.
Use the Script Editor to modify existing scripts or to create a new script. If you access the Script Editor from
the Edit Scripts menu item, your modifications are saved on the file level and are available to all new pairs
associated with the script. If you want your modifications to only be available to the instance of a script
associated with a specific pair, press the Edit Script button on the Add Pair or Edit Pair dialog for the pair.
See the Working with the Script Editor section on page 99 for more information.
Chariot provides an Error Log Viewer to let you easily get more information on errors you receive while
running Chariot. To view an error log, select the View Error Logs menu item. See the Working with the
Error Log Viewer section on page 96 for more information.

Working with Output Templates


Output templates let you save printing options in a template. When you want to print a test, you can then
select the template containing the print options you want to use. This saves you the time of reselecting print
options each time you print a test.
To work with output templates, select the Edit Output Templates menu item from the Tools menu. The
Output Templates List dialog is shown. This dialog lists all output templates in Chariot. You can add new
templates, modify existing templates, delete templates, and copy templates from this dialog.
To delete an existing output template, highlight the output template you want to delete and press the Delete
button. The output template is removed from the Output Template List and deleted from Chariot.

Adding Output Templates


To add an output template, select the Edit Output Templates menu item from the Tools menu. The Output
Template List dialog is shown. Press the Add button. The Add Output Template dialog is shown.
Enter the name of the new output template in the Output Template field. This field is required to create a
new output template. The special characters *, \, and ? are not allowed in an output template name.
Select the print options you want to save in the output template.
To save your new output template, press the OK button. The Output Template List dialog is shown and your
new output template is shown in the list.

Modifying Output Templates


To modify an output template, select the Edit Output Templates menu item from the Tools menu. The
Output Templates List dialog is shown. Select the output template you want to modify and press the Modify
button. The Modify Output Template dialog is shown.

Modify the options you want to change in the output template. See Print and Export Options on page 62 in
The Test Window section for more information on these options.
To save your changes to the output template, press the OK button. The Output Template List dialog is shown.

Copying Output Templates


To create a new output template based on an existing output template, select the Edit Output Templates menu
item from the Tools menu. The Output Templates List dialog is shown. Select the output template you want to
create a copy of and press the Copy button. The Copy Output Template dialog is shown.
Enter the name of the new output template in the Output Template field. This field is required to create a
copy of an output template. The special characters *, \, and ? are not allowed in an output template name.
Modify any options you want to change in the output template. See Print and Export Options in The Test
Window section for more information on these options.
To save your changes to the output template, press the OK button. The Output Template List dialog is shown.

Working with IPX/SPX Entries


It is tedious to enter IPX addresses; they consist of 8 hex digits, a colon, then 12 more hex digits. When using
the IPX or SPX protocol in your tests, you'd like to be able to enter an easy-to-remember alias in the Edit Pair
dialog. Chariot maintains these aliases for you in a file at the console; you can set up the mapping once, and
use the alias names ever after. This file is like the HOSTS file used in TCP/IP, or the LU alias definitions
offered with APPC.
The IPX/SPX alias values are stored in the SPXDIR.DAT file, which is located in the same directory as Chariot.
You can edit this file directly, or use it to move the alias values to an installation of Chariot on another computer.
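As an aside, the address format described above (8 hex digits, a colon, then 12 hex digits) can be checked with a small pattern; this Python sketch uses made-up addresses.

    import re

    IPX_ADDRESS = re.compile(r"^[0-9A-Fa-f]{8}:[0-9A-Fa-f]{12}$")

    print(bool(IPX_ADDRESS.match("0000ABCD:0004ACFF1234")))   # True  (made-up address)
    print(bool(IPX_ADDRESS.match("0000ABCD:1234")))           # False (node part too short)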
To work with IPX/SPX Entries, select the Edit IPX/SPX Entries menu item from the Tools menu. The
IPX/SPX Alias List dialog is shown. This dialog lists the current list of aliases. When you start Chariot the
first time, this list is empty. You can add new aliases, modify existing aliases, delete aliases, and copy aliases
from this dialog. This list is preserved when you upgrade to a new version of Chariot.
To delete an existing alias, highlight the alias you want to delete and press the Delete button. The alias is
removed from the IPX/SPX List dialog and deleted from Chariot.

Adding IPX/SPX Aliases


To add an IPX/SPX alias, select the Edit IPX/SPX Entries menu item from the Tools menu. The IPX/SPX
Alias List dialog is shown. Press the Add button. The Add IPX/SPX Alias dialog is shown.
Enter the name of the new alias in the Alias Name field.
In the Address field, enter an 8:12 or 12:8 hex address.
To save your new alias, press the OK button. The IPX/SPX Alias List dialog is shown and your new alias is
shown in the list.

Modifying IPX/SPX Aliases


To modify an IPX/SPX alias, select the Edit IPX/SPX Entries menu item from the Tools menu. The IPX/SPX
Alias List dialog is shown. Select the alias you want to modify and press the Modify button. The Modify
IPX/SPX Alias dialog is shown.
Modify the IPX/SPX address. To save your changes to the alias, press the OK button. The IPX/SPX Alias
List dialog is shown.

Copying IPX/SPX Aliases


To create a new alias based on an existing alias, select the Edit IPX/SPX Entries menu item from the Tools
menu. The IPX/SPX Alias List dialog is shown. Select the alias you want to copy and press the Copy button.
The Copy IPX/SPX Alias dialog is shown.
Enter the name of the new alias in the Alias Name field. This field is required to create a copy of an alias.
If necessary, modify the IPX/SPX address in the Address field.
To save your new alias, press the OK button. The IPX/SPX Alias List dialog is shown.

Working with Quality of Service (QoS) Templates


Chariot can take advantage of the Generic Quality of Service (QoS) support provided by WinSock 2 on
Windows 2000 and Windows 98. You do this by specifying a QoS template name in the Service quality field
when adding RTP, TCP or UDP pairs. The QoS templates that you define are saved in the SERVQUAL.DAT file
at the console. See Selecting a Service Quality (for RTP, TCP, and UDP) on page 28 in the Configuring
Chariot in Your Network chapter for information on QoS templates and the SERVQUAL.DAT file.
To work with QoS templates, select the Edit QoS Template menu item from the Tools menu. The QoS
Templates dialog is shown. From this dialog, you can create a QoS template, modify an existing QoS template,
create a copy of an existing QoS template, and delete QoS templates.
If you use the same template name as a QoS template on the endpoint, the values configured on the QoS
Template Editor override the values of the template on the endpoint.
To create a new template, press the Add button. The QoS Template Editor is shown.
To modify an existing template, select the QoS template you want to modify and press the Modify button. You
can also double-click on the selected template. The QoS Template Editor is shown with the information from
the selected template.
To copy an existing template, select the QoS template you want to base the new template on and press the
Copy button. The QoS Template Editor is shown with the information from the selected template.
To delete a template, select the template you want to delete and then press the Delete button. The template is
removed from the QoS Template dialog.

QoS Template Editor


If you are modifying an existing QoS template or copying an existing template, the QoS Template Editor shows
the information from the template you selected on the QoS Template dialog.
Name
Enter a unique name for the template. If there is a matching template found with this name on the
endpoint, then the values configured on this dialog override the values defined on the endpoint.
Service Type
Select the value of the level of service to negotiate for the flow from the Service Type field. The following
service types have been defined.
No Traffic
In either the sending or receiving QoS specification, this value indicates that there will be no traffic in this direction. On duplex-capable media, this signals underlying software to set up unidirectional connections only.
Best Effort
The service provider takes the QoS specification as a guideline and makes reasonable efforts to maintain the level of service requested, without making any guarantees on packet delivery. The network routers do not guarantee prioritization of the data.
Controlled Load
The network router gives priority to the data and operates as if the data were the only data on the network at the time. Thus, this service may assume that a high percentage of transmitted packets will be successfully delivered by the network to the receiving end nodes (packet loss rate should closely approximate the basic packet error rate of the transmission medium), and that the transit delay experienced by a high percentage of the delivered packets will not greatly exceed the minimum transit delay experienced by any successfully delivered packet.
Guaranteed
This service type value is designed for applications that require a precisely known quality of service but would not benefit from better service, such as real-time control systems. The service provider implements a queuing algorithm which isolates the flow from the effects of other flows as much as possible and guarantees the application the ability to propagate data at the Token Rate for the duration of the connection. If the sender sends faster than the Token Rate, the network may delay or discard the excess traffic. If the sender does not exceed the Token Rate over time, then Latency is also guaranteed.
General Information
All service types are supported for this traffic flow.
No Change
In either sending or receiving QoS specifications, this level of service requests that the quality of service in the corresponding direction is not changed. Select No Change when requesting a QoS change in one direction only, or when requesting a change only in the Provider Specific part of a QoS specification and not in the Sending Flowspec or the Receiving Flowspec.

The Token Rate field and the Token Bucket Size field describe the bandwidth, which is the rate at which a
stream of data can be sent. These fields are designed to efficiently accommodate transmissions that vary in
rate. The basic concept portrays a bucket that is filled with tokens at the specified token rate. Each token lets
an application send a certain amount of data.
Token Rate (bytes/sec)
This field defines the burst rate. If packets are sent out uniformly at the token rate, the bucket remains
empty. Each outgoing packet is matched by one token. If a packet is sent without a matching token, the
packet may be dropped. If the transmission rate is less than the token rate, the unused tokens accumulate
up to the token bucket size. The Token Rate field is expressed in bytes/second. A value of No Rate
indicates that no rate-limiting is enforced. If this is the case, the Token Bucket Size field is not
applicable.
To transmit without losing packets, the following must be configured:

Set the Token Rate field at or above the average transmission rate.

Set the Token Bucket Size field large enough to accommodate the largest expected burst of data.
If an application sends data at a low rate for a period of time, it can then send a large burst of data all at once until it runs out
of tokens. Afterward, data may only be sent at the token rate until unused tokens accumulate again.
Token Bucket Size (bytes)
This field controls the size of data bursts in bytes, but not the transmission burst rate. If packets are sent
too rapidly, they may block other applications' access to the network for the duration of the burst. This
field is the largest typical frame size in video applications, expressed in bytes. In constant rate
applications, the Token Bucket Size is chosen to accommodate small variations. In video applications,
the token rate is typically the average bit rate peak to peak. In constant rate applications, the Token Rate
field should be equal to the Maximum Transmission Rate field.
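A minimal token-bucket sketch in Python (purely illustrative; not the service provider's actual algorithm) shows how the Token Rate and Token Bucket Size interact to decide whether a packet conforms; the rates and sizes are assumed values.

    class TokenBucket:
        def __init__(self, token_rate, bucket_size):
            self.rate = token_rate      # bytes of credit added per second
            self.size = bucket_size     # maximum credit that can accumulate
            self.tokens = bucket_size   # start with a full bucket
            self.last = 0.0

        def try_send(self, packet_bytes, now):
            # Accumulate tokens since the last check, capped at the bucket size.
            self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True              # packet conforms and may be sent
            return False                 # packet exceeds the available credit

    bucket = TokenBucket(token_rate=100000, bucket_size=8000)
    print(bucket.try_send(1500, now=0.00))   # True  -- the bucket starts full
    print(bucket.try_send(8000, now=0.01))   # False -- only about 7,500 tokens available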
Latency (microseconds)
Enter the maximum acceptable delay in microseconds between transmission of a packet by the sender and
its receipt by the intended receiver or receivers. The precise interpretation of this number depends on the
service type specified in the QoS request.
Delay Variation (microseconds)
This field contains the difference, in microseconds, between the maximum and minimum possible delay
that a packet experiences. This value is used to determine the amount of buffer space needed at the
receiving side in order to restore the original data transmission pattern.
Maximum Transmission Rate (bytes/sec)
This field is where you specify how fast packets may be sent back to back, expressed in bytes/second. This
information lets intermediate routers allocate their resources efficiently; some intermediate systems can
take advantage of it, resulting in a more efficient resource allocation.
Maximum Transmission Size (bytes)
This field is where you specify the largest packet size, in bytes, that is permitted in your network. QoS
enabled routers use this in their policing of multicast network traffic.
Minimum Policed Size (bytes)
Enter the minimum packet size in bytes that will be given the level of service requested.
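Taken together, these fields make up a flowspec much like the one used by QoS-aware protocol stacks (for example, the Winsock 2 FLOWSPEC structure on Windows). The C declaration below is only an illustrative summary of the fields described in this section; the type and field names are invented and are not Chariot's or Winsock's exact identifiers.

    /* Illustrative summary of the QoS flowspec fields described above.
     * Names are invented for readability. */
    typedef struct {
        unsigned long token_rate;         /* Token Rate, bytes/sec (or no rate)   */
        unsigned long token_bucket_size;  /* Token Bucket Size, bytes             */
        unsigned long peak_rate;          /* Maximum Transmission Rate, bytes/sec */
        unsigned long latency;            /* Latency, microseconds                */
        unsigned long delay_variation;    /* Delay Variation, microseconds        */
        unsigned long service_type;       /* requested level of service           */
        unsigned long max_packet_size;    /* Maximum Transmission Size, bytes     */
        unsigned long min_policed_size;   /* Minimum Policed Size, bytes          */
    } qos_flowspec_sketch;

A sending QoS specification and a receiving QoS specification each carry one such set of values, which is why the No Change level of service described earlier is useful when only one direction should be modified.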


The Help Menu


Select the Contents and Index menu item to get links to the following online manuals:

- Chariot User Guide
- Performance Endpoints manual
- Application Scripts manual
- Messages manual
- Chariot Programming Reference

Select the General Help menu item to get descriptive information about the window you are currently viewing.
Select the Using help menu item for guidance on using the online help in Chariot.
Select the Keys help menu item for a list of all keys and key combinations available in each window.
Select the About Chariot menu item for details on the Chariot version and build level, and for information
about service and support.

Getting Information About Chariot


The About Chariot dialog contains copyright and release information.
Press the Support info button for Chariot technical support information.

Keys Help for the Main Window


You can use the following keys and key combinations in the main Chariot window, instead of using the mouse.
F1 - get help for the Chariot Main window.
F2 - get an index of all the available Chariot help topics.
F3 - exit Chariot.
F9 - show the keys and key combinations available in a window.
F10 - get information about how to use operating system help.
F11 - get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+N - set up a new Chariot test. This brings up a new Test window, and lets you immediately begin defining the first endpoint pair in the test.
Ctrl+O - open an existing Chariot test file.
Alt+F4 - this key combination can be used to close any window or dialog. When used to close a dialog, it has the same effect as pressing the Esc key or pressing Cancel with the mouse.

In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.


The Test Window


A Test window lets you work with the endpoint pairs in a test. You can add new endpoint pairs; copy, edit, or delete existing pairs; or view the results of a test you've run.
Access the Test window by pressing the New button on the Main window or by opening an existing test.
The endpoint pairs in a test are shown as rows in this window. The pairs are identified by their pair number,
shown on the left-hand side of each row. You can edit a pair by double-clicking on it, or by selecting it and
using the corresponding menu item or toolbar icon.
A Test window is partitioned into areas accessible by tabs. The test setup is the first tab; other tabs let you view the results of a test. You can save a test to a file, or export it to a variety of file formats. When you've set up the endpoint pairs you want to test, you're ready to run. The Set run options menu item lets you change how the test runs.
The toolbar icons offer shortcuts to commonly-used operations. Press the leftmost icon to Save the current test
to a file. Press the Run or Stop icon to start or stop a test; press the Poll icon to get the latest count of timing
records. The next four icons handle the clipboard operations: Cut, Copy, Paste, and Delete. These are
followed by four icons for working with endpoint pairs: Add, Edit, Replicate, and Renumber. Six icons
provide the most common grouping for endpoints; the two icons to their right let you Expand or Collapse all
groups. Lastly, pressing the Help icon gives assistance for the Test window.
Right-click the mouse button to bring up a floating Edit menu when you're pointing to any endpoint pair in a Test window. When you're in the graph region, right-click to bring up the set of graph functions.
Menu items and toolbar icons:

- The File Menu (Test Window) - to save, export, print, or clear results
- The Edit Menu (Test Window) - to add, change, copy, or delete pairs
- The View Menu - to change how pairs are grouped and shown
- Running a Test - to control the running of the test
- Help - to get more information

Tabs:

- Test Setup Tab - change pairs in a test
- Throughput Tab - shows throughput results
- Transaction Rate Tab - shows transaction rate results
- Response Time Tab - shows response time results
- Lost Data Tab - shows lost data results
- Jitter Tab - shows jitter results
- Raw Data Totals Tab - shows byte count totals
- Endpoint Configuration Tab - shows details about each endpoint
- Datagram Tab - shows datagram details (for IPX, RTP, and UDP)

The Test Setup tab is always available; the result tabs are only available when a test has results. Pressing the Throughput, Transaction Rate, Response Time, Lost Data, or Jitter tab causes the appropriate graph to be displayed. Choosing one of the other three tabs causes the pair information in the top portion of the Test window to change; the graph display at the bottom remains the same.


The Status Bar (Test Window)


The status bar, at the bottom of the Test window, shows summary information about the progress of a test as it
moves from initialization to completion. The status bar can include up to five fields, depending on the test's current state.
Status Bar Fields

- Number of pairs used in the current test.
- Start status (shown when the test is initializing and running) gives the overall progress of the test. When the test ends, this field displays the start date and time of the current test.
- End status (only displayed while the test is running) displays the elapsed time (hr:min:sec). When the test ends, this field shows the ending date and time.
- Duration (displayed while the test is running) estimates the time remaining until the test will complete (hr:min:sec). This shows no value until enough timing records are received to calculate an estimate. When the test ends, this field shows the actual total run time. You might see the estimated remaining time increase in large increments if one or more pairs are using random sleep durations.
- Completion status (only displayed when the test ends) displays whether the test ran to completion, was stopped by the user, or stopped because an error was detected.

The File Menu (Test Window)


The setup for a test and (optionally) one set of results are stored together in a binary file. Chariot's default is to
use the file extension of .TST for this file. For each endpoint pair in the test, Chariot stores the names of the
endpoints, their protocol and service quality, and the full script and script variables used by that pair. If results
are being saved in the file as well, Chariot saves all the timing records associated with the most recent test run
in the binary file.
Select the Save menu item to save a new test or to save a file using its same filespec; use Save as to save with a new or different name. If you are saving a test that contains a pair using the RTP protocol and you save in 2.x format, the file is saved, but the pair is not included in the saved test. If the test contains only pairs that use functions not supported in the Chariot version you are saving in, you cannot save the test at that level.
The title bar for the Test window shows the filename of the current test file. It says Untitled (n) if this is a new test that hasn't yet been saved (the n is the number of this Test window).
The Clear results menu item lets you erase the results from a test without affecting the test setup. Only the Test Setup tab remains in the Test window after performing a Clear. You might use this to save just the setup for a test, without a large set of accompanying results that you don't plan to keep anyway.
Use the Close menu item to exit from the current Test window. If a test is running in the window, you are asked if you want to stop it. If the test hasn't been saved to a file, you are asked if you want to save it.

Print and Export Options


Select the Print menu item to print any aspect of the test. You can print just the test setup, the test and its
results, or even the graphs associated with the results.


Select the Export menu item to export any aspect of the test. You select the format in which you want to export the test and which aspects of the test you want to export.
You can export formatted results to:

- an ASCII text file
- a Web page file, in HTML format
- a comma-separated values file, in CSV format
- a spreadsheet file, in WK3 format

We have added support for the CSV file format in this release to allow you to export data to a comma-separated value format that can be read into most spreadsheet applications. See Export Options for CSV file on page 65 for more information on the CSV file format.
The WK3 export format will not be supported in the next release of Chariot. In this release, the WK3 file
format does not support the new features added for Chariot 3.1, such as jitter data and CPU Utilization data.
This information is not included in tests exported to this format.
See the File Types and How They are Handled section on page 108 for information on how these files are
handled.
The Print Options (or Export Options) dialog lets you decide how much of the current test you want to print or
export. Remember that tests with lots of pairs and lots of timing records can take lots of paper!
Before choosing print or export options, make sure that the pairs of interest are expanded in the Test window.
If the group you want to print or export from is collapsed, double click on the group to expand it, so that the
pairs are shown. If a group in the Test Window is collapsed, its pairs are not detailed when you export or
print.
Having chosen the pairs you want to include in your report, you can next choose how much detail you want to see. Because CSV files are read by other software programs, the content of all CSV files must follow the same layout; therefore, the export options for the content exported to CSV are limited. See Export Options for CSV file on page 65 for information on the CSV file format and the Export to CSV File dialog.
Chariot lets you create output templates for your print/export options. You can then select a template when you
are printing or exporting without having to reselect the options each time. The first time you access the
Print/Export dialog for each test during a Chariot session, the Output Template field defaults to the output
template selected on the Output tab of the User Settings notebook. See Changing Your Output Defaults in
The Main Window section for more information on selecting default output templates.
To use an output template you have previously created or to modify an output template, select the output
template from the Output Template field. To create a new output template from this dialog, enter the name of
the new output template in this field. The special characters *, \, and ? are not allowed in an output template name.
You can also create new templates, modify existing templates, delete templates, and create a copy of a template
by selecting the Output Templates menu item from the Tools menu on the Main window. See Working with
Output Templates in The Main Window section for more information.


Summary report
Show just the test setup and a summary of any results. You can also choose to see the information shown
in each of the tabs and graphs.

- Check Result Tables to see the information summarized in the Throughput, Transaction Rate, Response Time, Lost Data, and Datagram tabs.
- Check Result Graphs to output the corresponding graphs. When exporting, GIF files are created. This option is only available for HTML export.

Complete report
Show everything: all the setup information, all the scripts, all the results analysis, and all the individual timing records. Not recommended unless you have small tests or lots of paper!
Custom
This button offers you a dialog box that lets you choose precisely what to show in your report or display the
selections from an output template you have created. See Custom Print and Export Options for more
information on this dialog.
As another simplification, you may want to print information about only a limited number of pairs. To do this, use the 'mark' column in the Test window to choose only the relevant pair or pairs. Click on the Print icon and go to the scope group in the middle of the dialog. Select the Print marked groups and pairs radio button. This limits the output to only marked groups or pairs. Next, press the Custom radio button, then deselect everything except the parameters that are of interest to you.
If you want to save the options you have selected on this dialog with the test, select the Save options with test
checkbox. The next time that you access the Print dialog for this test, the options you have selected will be
filled in. If you created a new output template or modified an existing output template, press the Save
Template button. This saves your print options to the selected output template.
At the bottom of the dialog, Chariot shows the approximate number of pages to be printed based on the options you have selected.
When setting options for printing, you can press the Select printer button to choose among the printers
defined at your computer. From the Properties button, you might also choose landscape or portrait printing
which may offer you a more readable view of your results. If you have long addresses or comments, we
recommend landscape printing to get the extra width.

Custom Print and Export Options


On the Custom dialog, you specify the summary and detailed test information in your report. To access the
Custom dialog, press the Select button on the Print Options dialog.
Summary Information defaults to all options selected (indicated by a checkbox). Remove the checkmark from
those Summary information options that you do not want to include in your report.

- Run options - provides a summary of your run options
- Test setup - provides information contained in the Test Setup tab of the Test window
- Throughput results - produces results displayed in the Throughput tab of the Test window
- Throughput graph - outputs the graph which depicts the throughput results
- Transaction rate results - produces the results displayed in the Transaction Rate tab of the Test window
- Transaction rate graph - produces the graph which depicts the transaction rate results

- Response time results - produces results displayed in the Response Time tab of the Test window
- Response time graph - produces the graph which depicts the response time results
- Lost data results - produces results displayed in the Lost Data tab of the Test window
- Lost data graph - produces the graph which depicts lost data results
- Jitter results - produces results displayed in the Jitter tab of the Test window
- Jitter graph - produces the graph which depicts the jitter results
- Datagram results - reports results displayed in the Datagram tab of the Test window
- Raw data totals - information displayed in the Raw Data Totals tab of the Test window
- Endpoint configuration - information about your test's endpoint(s)

When you choose to export to HTML, the graphs are output as separate files in GIF format. A GIF file can be
imported into your favorite word processor or graphics program, as well as being linked to the Web page
created by the export. See the File Types and How They Are Handled section on page 108 to understand the
filenames used for the GIF files.
None of the options in the Detailed Information default to selected. Click on the Detailed information
options that you wish to include in your report.

- Scripts - the script commands and variables used for your test are reported
- Endpoint configuration details - detailed information about the endpoints in your test is reported. You can see this information by choosing the Endpoint configuration menu item from the View menu.
- Timing records - the individual timing records for the pairs in your test are reported. There can be a lot of these (tens of thousands), so we recommend getting an idea of the number by selecting the Raw Data Totals tab.

All of these options can be chosen by pressing the Select all button. A checkmark appears beside each of the
options, indicating that it is selected. You can uncheck all the options by pressing the Deselect all button.
One or more of these Print or Export options may appear grayed, indicating that the option(s) is not available
to be reported on for this particular test. An option is enabled only when information of that type exists for the
test.
At the bottom of the dialog, Chariot shows the approximate number of pages to be printed (when setting
options for printing).

Export Options for CSV file


Chariot can export files in the widely-supported file format called CSV (for comma-separated values). In
CSV format, values are separated from one another by commas, with the addition of double-quotes when
needed. Popular spreadsheet programs, such as Microsoft Excel and Lotus 1-2-3, can open and save files in
CSV format, making the CSV file format a good tool for coordinating the information you want to export from
Chariot.


There are two special characters in CSV format which need special treatment: commas and double-quotes.

- If a comma is in a string, surround the whole string with double quotes. For example, the comment field below contains two commas:
  "These commas, here, are part of the comment."
- Surround each double quote in a string with its own pair of double quotes. For example, the comment field below contains two sets of double quotes:
  This endpoint is used in the """R&D1""" department.
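If you post-process exported data or build comparable CSV files with your own tools, a common way to satisfy both rules is to quote any field that contains a comma or a double quote and to double each embedded double quote. The C sketch below is illustrative only and is not part of Chariot.

    #include <stdio.h>
    #include <string.h>

    /* Write one CSV field, quoting it if it contains a comma or a double
     * quote and doubling any embedded double quotes.  Illustrative only. */
    static void write_csv_field(FILE *out, const char *field)
    {
        int needs_quotes = (strchr(field, ',') != NULL ||
                            strchr(field, '"') != NULL);
        if (!needs_quotes) {
            fputs(field, out);
            return;
        }
        fputc('"', out);
        for (const char *p = field; *p != '\0'; p++) {
            if (*p == '"')
                fputc('"', out);    /* escape the quote by doubling it */
            fputc(*p, out);
        }
        fputc('"', out);
    }

Called with the first comment above, this writes the text surrounded by double quotes, which spreadsheet programs such as Microsoft Excel and Lotus 1-2-3 read back as the original comment.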

The Export to CSV dialog is shown when you select the Export to CSV menu item from the File menu. You
can select the type of content you want exported from the test. Remove the checkmark from those options that
you do not want included in your report.

- Test summary and run options - provides a summary of any results and your run options
- Pair summary - provides information contained in the Test Setup tab of the Test window
- Pair details - provides the timing records for the pairs in your test

You can choose to report on all the pairs (Export all) or you can choose to report on only specific pairs or
groups of pairs (Export marked pairs). This second radio button lets you choose the pairs that have the
mark symbol next to them, in the first column on the left-hand side of the Test Window.

Printer Options
Select the printer for your output, from among the printers currently defined on your computer by pressing the
Select printer button. To change the characteristics of that printer, choose the Properties or Job properties
button.

The Edit Menu (Test Window)


The menu items in the Edit menu let you change the setup for a test. In addition to editing, deleting, and
copying individual endpoint pairs, you can select multiple pairs and perform these same operations across all of
them. You can also cut and paste pairs among different Test windows.
Right-click the mouse button to bring up a floating Edit menu when you're pointing to any endpoint pair in a Test window.
Cut
You can cut an existing pair or group of pairs from the Test window as follows. First, select the pair(s) to be
cut by clicking on the individual pair. Once selected, the pair is highlighted. When cutting multiple pairs,
hold down the Shift key and click on the first and last pairs to be cut. The two pairs on which you clicked, as
well as all pairs in between, are highlighted. Once selected, you can cut pairs in one of three ways:
1. Choose Cut under the Edit menu
2. Press Ctrl+X
3. Use the Cut icon (scissors) on the toolbar


If the Test window contains test results, you are prompted for confirmation of the cut, since a cut operation
clears all test results. At this point, you can cancel the cut operation or proceed with the cut. After the cut
operation is successfully executed, the Paste menu item under the Edit menu and the Paste icon on the toolbar
are enabled.
Copy
You can copy an existing pair or group of pairs from the Test window as follows. First, select the pair(s) to be copied by clicking on the individual pair. Once selected, the pair is highlighted. When copying multiple pairs,
hold down the Shift key and click on the first and last pairs to be copied. The two pairs on which you clicked
as well as all pairs in between are highlighted. Once selected, you can copy pairs in one of three ways:
1. Choose Copy under the Edit menu
2. Press Ctrl+C
3. Use the Copy icon (two paper sheets) on the toolbar

After the copy operation is successfully executed, the Paste menu item under the Edit menu and the Paste icon
on the toolbar are enabled. Copy does not cause test results to be cleared.
Paste
You can paste data that has been cut or copied within Chariot. Data that has been put on the clipboard by another application results in a disabled Paste menu item under the Edit menu and a disabled Paste icon (a clipboard) on the toolbar. However, it is possible to paste data cut or copied by Chariot into other applications that accept tab-delimited data, such as spreadsheets and editors.
To perform a paste operation, you must have successfully completed a cut or copy operation. A paste operation
can be performed in one of three ways:
1. Choose Paste under the Edit menu
2. Press Ctrl+V
3. Use the Paste icon (a clipboard) on the toolbar

Before pasting the clipboard contents into the selected Test window, Chariot ensures that the paste operation does not exceed the licensed number of endpoint pairs. If this number would be exceeded by the current paste operation, a dialog tells you and the paste is aborted; otherwise the paste continues.
If the Test window contains test results, you are prompted for confirmation of the paste, since a paste operation
clears all test results. At this point, you can cancel the paste operation or proceed.
Delete
You can remove an existing pair or set of pairs from the Test window. First, select the pair(s) to be deleted,
by clicking on the row or rows. When selected, the rows are highlighted. When deleting multiple pairs, hold
down the Shift key and click on the first and last rows to be removed. The two rows on which you clicked, as
well as every row in between, are highlighted. Then, delete the highlighted pairs in one of the following three
ways:
1. Choose Delete under the Edit menu
2. Press Ctrl+D
3. Click on the Delete icon in the toolbar.


A warning box appears asking, "Are you sure you want to delete the selected endpoint pair(s)?" Press the Yes button to continue deleting or press the No button to cancel the delete. Upon pressing the Yes button, the highlighted pair(s) is removed from the Test window.
Select all pairs
To select all of the pairs in the Test window, go to the Edit menu and choose Select all pairs, or press Ctrl+A.
All of the pairs in the Test window are highlighted, to indicate that they are selected.
Deselect all pairs
To deselect all of the highlighted pairs in the Test window, go to the Edit menu and select Deselect all pairs.
All pairs which were previously highlighted are no longer selected.
Mark selected items
You can mark any pair or group shown in the Test window. Mark a pair or group when you specifically want
to include it in a graph or in a printed report. New pairs are initially marked when they're created.
Choose this menu item to mark all the pairs and groups that are currently shown as selected (highlighted) in
the Test window. The fact that the pair or group is marked is displayed on the left side of the Test window.
Unmark selected items
In contrast to marking pairs or groups (as discussed above), you can unmark any pair or group shown in the
Test window. Unmark a pair or group when you specifically don't want to include it in a graph or printed
report.
Choose this menu item to unmark all the pairs and groups that are currently shown as selected (highlighted)
in the Test window. The fact that the pair or group is no longer marked is displayed on the left side of the Test
window.
Edit
You can modify an existing pair or multicast group in the Test window. First, select the pair(s) or multicast
group to be edited by clicking on the row or rows. When selected, the rows are highlighted. Then, edit the
pair(s) in one of the following three ways:
1. Go to the Edit menu and select Edit
2. Press Ctrl+E
3. Click on the Edit icon in the toolbar.

If you highlighted a single pair, the Edit an Endpoint Pair dialog opens to let you modify the pair. See Adding or Editing an Endpoint Pair on page 70 for more information on modifying a pair. If you highlighted a single multicast group or a pair in a multicast group, the Edit a Multicast Group dialog opens to let you modify the multicast group.
If you highlighted multiple pairs, the Edit Multiple Endpoint Pairs dialog opens, enabling you to modify the definition of many pairs simultaneously. You cannot edit multiple multicast groups at the same time. If you highlight multiple multicast groups or a combination of multicast pairs and single pairs, the Edit command is not available.
Note that the Edit feature, in combination with the Replicate feature, is a quicker method of adding new pairs,
compared to the Add pair and the Add Multicast group menu items. To add a new, unique row or group of
rows, select the Replicate command. Then, select the Edit command to open the Edit an Endpoint Pair, Edit
Multiple Endpoint Pairs, or Edit a Multicast Group dialog. Inside these dialogs, you can modify the definition
of a pair or set of pairs to contain the specific test information you want.


Edit Console to Endpoint 1


To use a different network protocol or service quality between the console and Endpoint 1, select the Edit
Console to Endpoint 1 menu item. See Changing Console to Endpoint 1 Values on page 71 for more
information.
Replicate
You can duplicate an existing pair, an existing multicast group, or a group of pairs in the Test window. First, select the pair(s) or group(s) to be copied; when selected, the rows are highlighted. Then, replicate the pair(s) or
group(s) in one of the following two ways:
1. Go to the Edit menu and select Replicate
2. Click on the Replicate icon in the toolbar.

If you highlight pair(s) and multicast group(s), the Replicate menu item is not available. You must replicate
pairs and multicast groups separately. Upon performing one of these steps, the Replicate Selected Pairs or
Replicate a Multicast Group dialog opens. See Replicating Pairs in a Test or Replicating a Multicast Group
in a Test for more information.
Add pair
You can add a new pair to the Test window in three ways:
1. Go to the Edit menu and select Add pair
2. Press Ctrl+P
3. Click on the Add pair icon in the toolbar.

Upon performing one of these steps, the Add an Endpoint Pair dialog appears. See Adding or Editing an
Endpoint Pair for more information on adding a pair.
Add Multicast group
You can add a new multicast group to the Test Window in three ways:
1. Go to the Edit menu and select Add Multicast group
2. Press Ctrl+G
3. Click on the Add multicast group icon in the toolbar.

Upon performing one of these steps, the Add a Multicast group dialog box appears. See Adding or Editing a
Multicast Group for more information on adding a multicast group.
Renumber all pairs
After having edited or deleted pairs within a Test window, the numbering in the Group column may no longer be sequential; pairs aren't automatically renumbered. Your list of pairs may start with a number other than 1, may have numbers from the sequence missing, or may be numbered out of sequence. To correct these problems, invoke the Renumber all pairs menu item in one of the following ways:
1. Go to the Edit menu and select Renumber all pairs
2. Press Ctrl+N
3. Click on the Renumber pairs icon in the toolbar.

The pairs in the Test Window are renumbered from 1 in sequential order without missing numbers.


Adding or Editing an Endpoint Pair


An endpoint pair needs the two network addresses of the computers, the protocol to use between them, and the
script they should run. You can optionally choose to use a different type of network protocol for the connection
between the console and Endpoint 1.
In the Endpoint 1 network address and Endpoint 2 network address fields, enter the from and to
addresses, respectively, for this pair.

- For APPC, enter fully-qualified LU names or LU aliases in these fields. Examples of fully-qualified LU names are GANYMEDE.JOHNQ or USGOVNSA.I594173. When using LU aliases, ensure they are defined correctly in the APPC configuration on the endpoint computers themselves, not at the console. LU aliases are case-sensitive, so be sure to use the correct capitalization.

- For IPX or SPX, enter an IPX address in hexadecimal format, or enter its alias. An example of an IPX address in hex format is 03F2E410:0A024F32ED02. The first 8 digits (4 bytes) are the network number; the 12 digits (6 bytes) following the colon are the node ID (you may also hear these referred to as the network address and node address). But no one wants to enter hex numbers like that more than once. Chariot lets you have your own aliases for these addresses, which are much easier to type and remember. (A small parsing sketch for this address format follows this list.)

- For RTP, TCP, or UDP, enter an IP address, in domain name or numeric format. An example of domain name format is www.Ganymede.com, while an example of numeric format is 199.72.46.202.
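The IPX address layout described above (an 8-digit network number, a colon, and a 12-digit node ID) is easy to check and split in your own tooling, for example when maintaining an address list outside Chariot. This C sketch is illustrative only and is not part of Chariot.

    #include <ctype.h>
    #include <string.h>

    /* Split an IPX address of the form NNNNNNNN:HHHHHHHHHHHH (8 hex digits,
     * a colon, then 12 hex digits) into its network number and node ID.
     * Returns 0 on success, -1 if the format is wrong.  Illustrative only. */
    static int parse_ipx_address(const char *addr, char net[9], char node[13])
    {
        if (strlen(addr) != 21 || addr[8] != ':')
            return -1;
        for (int i = 0; i < 21; i++) {
            if (i == 8)
                continue;
            if (!isxdigit((unsigned char)addr[i]))
                return -1;
        }
        memcpy(net, addr, 8);       net[8]  = '\0';   /* network number */
        memcpy(node, addr + 9, 12); node[12] = '\0';  /* node ID        */
        return 0;
    }

For example, parsing 03F2E410:0A024F32ED02 yields the network number 03F2E410 and the node ID 0A024F32ED02.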

As you enter the network addresses of endpoints, Chariot remembers them for you, making it easier to set up a
test the next time. The names are saved in a file named ENDPOINT.DAT, which you can edit with an ASCII
text editor. This lets you add, modify, or delete entries in your list of network addresses.
Be sure that you do not enter an IP Multicast address in the Endpoint 1 network address and Endpoint 2
network address fields. If you enter an IP Multicast address in these fields, the test fails with error CHR0209.
Select one of the available communication stacks from the Network protocol pulldown. When this test is run,
that protocol must be correctly configured and started on the endpoint computers. If you selected a Streaming
script, you must select either IPX, RTP, or UDP. If a service quality is required by the network protocol, enter
or select a value defined on the endpoint computers. The service quality values you enter are remembered in
file SERVQUAL.DAT.

For RTP, TCP or UDP, see Working with Quality of Service (QoS) Templates on page 57 in The Main
Window section for information on setting up QoS templates.

For APPC, see Selecting a Service Quality (APPC Mode Name) on page 24 in the Configuring Chariot
in Your Network chapter for information on APPC mode names.

You can choose to edit the existing script associated with an endpoint pair, or to open a new script. If this is a
new pair, your only choice is to associate a script with this pair, by selecting the Open a script file button.
After you have associated a script with an endpoint pair, you can edit the script by selecting the Edit this
script button. See the Messages and Application Scripts manual for information on scripts.
We recommend always entering a descriptive Pair comment. This lets you identify each pair in the Test
window easily.


Adding or Editing a Multicast Group


To create a test emulating a multicast application, you must first set up a multicast group. Multicast group
members are Endpoint 2 computers designated as receivers of the data. From the Edit menu, select the Add Multicast group menu item. The Add a Multicast Group dialog box is shown.
To edit an existing multicast group, select the multicast group from the Test Window. From the Edit menu,
select the Edit menu item. The Edit a Multicast Group dialog is shown.
In the Group name field, enter a Group name that is unique within the test. If you do not enter a group name
in this field, Chariot creates a group name that is a combination of the IP address and port of the multicast
group.
We recommend always entering a descriptive Group comment. This lets you easily identify each multicast
group in the Test window.
In the Multicast address field, enter or select a class D IP address. Class D addresses are in the range
224.0.0.0 to 239.255.255.255. Chariot verifies that the IP Multicast address you enter is within the required range, but it does not check whether you have entered a reserved class D address. See Emulating IP Multicast Applications on page 43 in the Working with Datagram and Multimedia Support chapter for information on reserved multicast addresses and the implications of using these addresses.
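Chariot performs the range check for you, but if you generate multicast addresses in your own scripts or tools, the same check is easy to reproduce: a class D address is any IPv4 address whose first octet is between 224 and 239. The C fragment below is illustrative only; it does not reject the reserved class D addresses mentioned above.

    #include <stdio.h>

    /* Return 1 if the dotted-decimal IPv4 address is a class D (multicast)
     * address, i.e. in the range 224.0.0.0 through 239.255.255.255.
     * Illustrative only; reserved class D addresses are not excluded. */
    static int is_class_d(const char *dotted)
    {
        unsigned int a, b, c, d;
        if (sscanf(dotted, "%u.%u.%u.%u", &a, &b, &c, &d) != 4)
            return 0;
        if (b > 255 || c > 255 || d > 255)
            return 0;
        return a >= 224 && a <= 239;
    }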
Enter or select the port number for the multicast group in the Multicast port field. The Multicast port must
uniquely identify the Multicast group. You can enter values in the range from 1 to 65535. You should avoid
using well-known port numbers.
In the Endpoint 1 network address field, enter the from address for this multicast group. Enter an IP
address, in domain name or numeric format. An example of numeric format is 199.72.46.202. Endpoint 1 acts as the multicast server.
You can add and delete multicast group members. To add a multicast group member, enter or select the
endpoint in the Multicast Group Members field and select the Add button. The multicast group member is
displayed in the listbox underneath the Multicast Group Members field. To delete a multicast group member,
select the group member from the listbox and select the Delete button. The group member is removed from the
listbox.
IP Multicast runs only over RTP or UDP. Select one of these protocols in the Network Protocol field. You
can supply a UDP or RTP QoS template in the Service Quality field, if QoS is supported on the associated
endpoints.
You can choose to edit the existing script associated with the multicast group, or open a new script. If this is a
new group, your only choice is to associate a streaming script with this group, by selecting the Open a script file button. After you have associated a script with a multicast group, you can edit the script by selecting the Edit this script button. See the Messages and Application Scripts manual for information on streaming scripts.

Changing Console to Endpoint 1 Values


If you want to use a different network protocol or service quality between the console and Endpoint 1, select the
Edit console to Endpoint 1 menu item from the Edit menu. The Edit Console to Endpoint 1 dialog is shown.
Uncheck the Use Endpoint 1 to Endpoint 2 values button. For example, you might use TCP to connect from
the console to Endpoint 1, yet run an APPC test between Endpoint 1 and Endpoint 2. As another example (if APPC is supported at the console), you might use the #INTER mode between Endpoint 1 and Endpoint 2 (for
speediest performance) and use the #BATCH mode between Endpoint 1 and the console (for less-disruptive
delivery of results).
Endpoint computers can have multiple network addresses. For example, it is common for a computer with
multiple adapters to have multiple IP addresses. You can make use of both addresses at Endpoint 1. Specify
one of the addresses for the connection between Endpoint 1 and Endpoint 2. Specify the other address as the
target for the connection from the console, in the field named How does the Console know Endpoint 1?.
If using APPC, enter a mode name in the Service Quality field. However, if using TCP, a QoS cannot be used
on the connection between the console and Endpoint 1.

Replicating Pairs in a Test


Highlight the pairs you want to replicate and select Replicate from the Edit menu. If you highlight pair(s) and
multicast group(s), the Replicate menu item is not available. You must replicate pairs and multicast groups
separately.
Specify the number of copies you wish to make of the highlighted pairs. In the window, the Replication Count
field defaults to a value of 1, which indicates that one copy will be made of each of the highlighted pair(s). To
specify a number greater than one, either enter the desired number of copies in the Replication Count field or
use the up and down arrow keys to set the Replication Count field to the desired number. You are limited to
the total number of pairs allowed by your Chariot license.
After you have specified the number, press the OK button. Choosing OK adds replicated pair(s) to the end of
the Test window. If you decide not to replicate the selected pair(s), press the Cancel button.

Replicating a Multicast Group in a Test


Highlight the multicast group you want to replicate and select Replicate from the Edit menu to get the
Replicate a Multicast Group dialog. Only a single multicast group can be replicated at a time. If you highlight
a combination of endpoint pairs and multicast groups, the Replicate menu item is not available. You must
replicate pairs and a multicast group separately.
The dialog shows the information for the multicast group that you highlighted, except for the Group Name, Group Comment, and Multicast port fields. These fields are left blank because their values must be unique. Enter
the information and modify any other fields that you want to change. See Adding or Editing a Multicast
Group on page 71 for information on the fields on this dialog.

The View Menu


The View menu items let you alter the way in which information is displayed in the Test window and the
Comparison window.
Sort
You can reorganize the order in which pairs are displayed in the window according to a series of different
criteria. To sort the pairs, go to the View menu and select Sort. In the Sort dialog, you can define the criteria
by which the pairs are sorted within their groups. See Changing the Way Pairs are Shown for information
on selecting sort criteria in the Sort window.


Group sort order


After grouping pairs together (see Group by, below), you can sort the groups of pairs in either ascending or
descending order. To sort groups by group name, choose Group sort order from the View menu. To the right
of the menu item, select whether you want to sort the groups in ascending or descending order. The groups in
the Test window are reordered according to your selection.
Group by
You can group pairs in the window in two ways:
1. Go to the View menu and select Group by
2. Press one of the seven Group by icons in the toolbar.

When grouping the pairs using the Group by menu item under the View menu, a list appears to the right of
the menu item. Choose the criteria by which the pairs should be grouped. The pairs in the window are then
displayed in groups. To group pairs using the toolbar icons, click on the applicable icon:
ALL - no grouping
TCP - group by network protocol
SCR - group by script name
EP1 - group by endpoint 1 address
EP2 - group by endpoint 2 address
SQ - group by service quality
PG - group by named groups of pairs

Upon grouping pairs, the groups are listed in ascending order. You can rearrange them in descending order;
use the Group sort order (described above) to change the display of groups in ascending or descending order.
Information
The window is divided into tabbed areas. If you are in the Test window and a test is new, only the Test Setup tab is shown. While a test is running, or after a run, all of the tabs can be viewed. Only tabs relevant to the results are shown. For example, if the test does not contain RTP pairs, the Jitter tab is not shown.

- Test Setup Tab - change pairs in a test
- Throughput Tab - shows throughput results
- Transaction Rate Tab - shows transaction rate results
- Response Time Tab - shows response time results
- Lost Data Tab - shows lost data results
- Jitter Tab - shows jitter results
- Raw Data Totals Tab - shows byte count totals
- Endpoint Configuration Tab - shows details about each endpoint
- Datagram Tab - shows datagram details (for IPX, RTP, and UDP)

Each of these areas is identified by a labeled tab on which you may click to open the window. Rather
than clicking on tabs to move between areas, you can choose Information under the View menu.
Note that the Information menu item in the Test window is grayed until after a test has been run.


Expand all tests and groups


When information is summarized in the window by group, you can expand the groups to view details for the
pairs. Groups can be expanded in two ways:
1. Choose Expand all tests and groups under the View menu
2. Click on the Expand groups icon.

The test setup and results are displayed for all of the pairs.
Collapse all tests and groups
Groups of pairs can be collapsed to decrease the level of detail displayed in the Test window. Groups can be
collapsed in two ways:
1. Go to the View menu and select the Collapse all groups item
2. Click on the Collapse groups icon

The information for the pairs is summarized and displayed by groups. Individual groups may be expanded or
collapsed by double-clicking on the group.
Throughput units
Your throughput results can be shown in different units of measurement. To change the unit of measure by
which your throughput results are shown, choose the Throughput units item from the View menu. In the
Throughput Units dialog box, which opens, select the unit of measurement you wish to use.
Show error message
If there's an error associated with the selected endpoint pair, choosing this menu item causes Chariot to show
the Error Message dialog for the message.
Show timing records
Choosing this menu item shows the individual timing records associated with the results for this pair.
Show endpoint configuration
Choosing this menu item shows extensive details about the endpoint programs, and the operating systems and
protocol stacks they are using. This information differs among operating systems and endpoint versions. See
Endpoint Configuration Details for an example of what's shown.

Changing the Way Pairs are Shown


You can reorganize the way in which pairs are shown in the Test window, according to a series of different
criteria. Selecting Sort from the View menu opens the Sort dialog. In the Sort dialog, you specify the first,
second, and third criteria by which the pairs will be sorted.
In the Sort By box, click on the down arrow for a list of available criteria in the primary sort. Then, specify
whether you would like this sort to be performed in ascending or descending order by clicking on the
applicable button in the Sort By box. You can complete the sort definition at this point by pressing the OK
button. Or, you may continue to add a second and, optionally, a third sort level.
To add the second and third sort levels, click the down arrow in the Then By box and select a different sort
value from the list of available criteria. Then, specify whether you want this sort to be performed in ascending
or descending order, by clicking on the applicable button in the Then By box. To add a third sort level, repeat
these steps in the second Then By box.


When you have defined your sort, press the OK button to re-sort the Test window according to your specifications.
To change your mind and avoid sorting, press the Cancel button.

Graph Configuration
A graph is always shown at the bottom of the Test window when results are available. The Graph
Configuration dialog lets you choose what type of graph is shown.
Choose among line-, bar-, or pie-type graphs. Histograms are shown as bar graphs. The type of graph applies
whether you're viewing throughput, transaction rate, response time, lost data, or jitter.
Press the Pairs or Groups button to decide how the results are aggregated in your graph. The graph you've chosen shows either the pairs you've marked (giving you more detail) or the groups you've marked (giving you a way to combine many pairs).
You can also choose a legend, which shows the color and patterns used for each pair or group. Additionally, you can choose whether to see a grid (thin lines which help you visualize your data better).
Press the Axis Details button for further control over how the graph is shown.
Press the Apply button to see the immediate effect of your choices. If you like what you see, press OK. If you
want to reset your changes and try again, press Undo. Press Cancel to close the dialog box without
remembering the changes you've applied.

Axis Details, Line Graph


The line graph depicts elapsed time across the horizontal axis and the Throughput, Transaction Rate, Response
Time, Lost Data, or Jitter along the vertical axis. The range of both of these axes may be modified to display
information according to your needs.
In the Axis Details dialog box, you define the time frame displayed on the line graph during test running, the
time frame shown in the line graph after the test has run, and the range for the line graphs.
During the running of a test, the line graph depicts a specified time frame in which activity has occurred. For
example, the graph may show the last 10 seconds or the last 120 seconds of testing results. The time frame of
activity displayed during test running defaults to the last 60 seconds of test running. Change the While
running field from 60 seconds to a smaller or larger time frame, if desired.
After the test has run, the line graph provides results for a particular time frame. In the Axis Details dialog
box, the time frame defaults to contain automatic minimum and maximum ends which cover the total length of
time that the test ran. You may increase or decrease the Elapsed time displayed in the line graph by specifying
a minimum and or maximum time frame.
To change the minimum end of the Elapsed time frame, click in the Minimum Auto box to remove the
checkmark. Upon clearing the box, the Minimum line appears in black type, indicating that it is enabled.
Enter a value for the hours, minutes and/or seconds at which time you wish the graph to begin displaying
results. For example, if you type a value of 1 in the Min field, the Elapsed time in the line graph begins at one
minute. All test results before one minute are not displayed in the line graph.


Repeat the steps above on the Maximum line to change the maximum end of the Elapsed time range. The
value that you type on the Maximum line determines the end of the Elapsed time range shown on the line
graph. For example, if you enter 3 in the Min field, the Elapsed time in the line graph ends at 3 minutes. Test
results after three minutes are not displayed in the line graph.
The vertical axis of the line graph depicts the data for the tab selected on the Test window. In the Axis Details
dialog box, you may specify the results range displayed on the vertical axis. The results range defaults to
contain minimum and maximum ends which cover the total range of Throughput, Transaction Rate or
Response Time. You may decrease or increase the range of the vertical axis by changing the minimum and/or
maximum range ends.
Click on the tab for the type of line graph you wish to modify. To change the lowest value on the vertical axis,
click in the Minimum Auto box to remove the checkmark. Upon clearing the box, the Minimum line appears
in black type, indicating that it is enabled. Enter a value in the Minimum field which determines the lowest
value on the vertical axis (such as the lowest #/second for Transaction Rate). For example, if you enter 1600 in
the Transaction Rate's Minimum field, the vertical axis begins at 1,600/second. Transaction Rate results
below 1,600/second are not displayed in the line graph.
Repeat the steps above on the Maximum line to change the maximum end of the results range. The value that
you type on the Maximum line determines the highest value on the vertical axis. For example, if you enter
3000 in the Transaction Rate's Maximum field, the highest point on the vertical axis is 3,000/second.
Transaction Rate results above 3,000/second are not displayed in the line graph.

Axis Details, Bar Graph


The bar graph depicts Throughput, Transaction Rate, Response Time, Lost Data, or Jitter for each pair/group
in a test. The bar graph displays each pair/group across the horizontal axis and the Throughput, Transaction
Rate, or Response Time along the vertical axis. In the Axis Details dialog box, you may modify the range of
the bar graphs vertical axis to display information according to your needs.
In the Axis Details dialog box, the range for the vertical axis defaults to contain minimum and maximum ends
which cover the total range for the results. You may decrease or increase the range of the vertical axis by
changing the minimum and/or maximum range ends.
Click on the tab for the type of bar graph you wish to modify. To change the lowest value on the vertical axis,
click in the Minimum Auto box to remove the checkmark. Upon clearing the box, the Minimum line appears
in black type, indicating that it is enabled. Enter a value in the Minimum field which determines the lowest
value on the vertical axis (such as the lowest #/second for Transaction Rate). For example, if you type a value
of 300 in the Transaction Rate's Minimum field, the vertical axis begins at 300/second. All Transaction Rate
results below 300/second will not be displayed in the bar graph.
Repeat the steps above on the Maximum line to change the maximum end of the results range. The value that
you type on the Maximum line determines the highest value on the vertical axis. For example, if you enter
2000 in the Transaction Rate's Maximum field, the highest point on the vertical axis is 2,000/second.
Transaction Rate results above 2,000/second are not displayed in the bar graph.
If you are using 640 x 480 screen resolution (VGA mode), Chariot only shows the graphs for 89 pairs in a test.


Axis Details, Histogram


The histograms represent the frequency distribution of Throughput, Transaction Rates, Lost Data, Jitter, or
Response Times. The two histograms display Throughput, Transaction Rate, Lost Data, Jitter, and Response
Time across the horizontal axis and the Percent of Total along the vertical axis. In the Axis Details dialog box,
you may modify the range of the horizontal axis to display information according to your needs.
In the Axis Details dialog box, the range for the horizontal axis defaults to contain minimum and maximum
ends which cover the total range of Throughput, Transaction Rate, Lost Data, Jitter, or Response Time for the
test results. You may decrease or increase the range of the horizontal axis by changing the minimum and/or
maximum range ends. In addition, you may change the number of divisions by which the frequencies are
displayed.
Click on the tab for the type of histogram you wish to modify. To change the lowest value on the horizontal
axis, click in the Minimum Auto box to remove the checkmark. Upon clearing the box, the Minimum line
appears in black type, indicating that it is enabled. Enter a value in the Minimum field which determines the
lowest value on the horizontal axis (such as the lowest #/second for Transaction Rate). For example, if you
type a value of 300 in the Transaction Rate's Minimum field, the horizontal axis begins at 300/second. All
Transaction Rate results below 300/second will not be displayed in the histogram.
Repeat the steps above on the Maximum line to change the maximum end of the Throughput, Transaction
Rate, Lost Data, Jitter, or Response Time range. The value that you type on the Maximum line determines the
highest value on the horizontal axis. For example, if you enter 2000 in the Transaction Rate's Maximum field,
the highest value on the horizontal axis is 2,000/second. Transaction Rate results above 2,000/second are not
displayed in the histogram.
You may choose to see as few as one frequency or as many as 100 frequencies across the x-axis. The Number
of divisions field in the Histogram Axis Details dialog box defaults to 20 divisions. Increase the number of
divisions to display more detail in the histogram; decrease the number of divisions to display more summary
information in the histogram.
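The relationship between the axis range and the number of divisions is straightforward: each division covers (maximum - minimum) / divisions of the horizontal axis, and a result falls into the division whose sub-range contains it. The C fragment below is a hedged sketch of that calculation; it is not Chariot code, and the names are made up.

    /* Map a result value to a histogram division, given the axis minimum,
     * the axis maximum, and the number of divisions.  Values outside the
     * range are not shown, as described above.  Illustrative only. */
    static int histogram_division(double value, double min, double max,
                                  int divisions)
    {
        if (divisions <= 0 || max <= min || value < min || value > max)
            return -1;                      /* not displayed */
        int division = (int)((value - min) / (max - min) * divisions);
        if (division == divisions)          /* value exactly at the maximum */
            division = divisions - 1;
        return division;                    /* 0 .. divisions-1 */
    }

With the default of 20 divisions and a Transaction Rate range of 300/second to 2,000/second, for example, each division covers (2000 - 300) / 20 = 85 transactions per second.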

The Run Menu


The Run menu items let you start and stop tests, as well as define how tests are run.
Run
You can run a test in three ways:
1. Go to the Run menu and select Run
2. Press Ctrl+R
3. Click on the Run icon in the toolbar

Once a test is started, the Run icon changes from a running man with a green light to a red stop sign. After the
test has successfully run, the Run icon returns to a running man and the Run Status changes to Completed.
In addition, six new tab areas appear in the Test window. You can click on these tabs to view different aspects
of the test results. See Running a Test to understand what occurs while a test is run.


Stop
You can stop a running test in two ways:
1. Go to the Run menu and select Stop
2. Press Ctrl+T

When you stop a running test, a warning box appears asking, "A test is currently running. Do you want to stop the test?" Press Yes to stop the test or press No to resume the running of the test. See Stopping a Running Test for more information on stopping.
Set run options
You can choose parameters for how one test is run by selecting the Set run options item from the Run menu.
A two-page notebook is shown and contains a Run options and a Datagram page.
Poll endpoints now
You can cause the console to contact each of the Endpoint 1 computers in a test, while a test is running. The
endpoints reply, returning the number of timing records they've created so far in this test. You can poll the
endpoints while a test is running in three ways:
1. Choose Poll endpoints now under the Run menu
2. Press F5
3. Click the Poll icon on the toolbar.

See Polling the Endpoints for information about why you'd choose to poll during a running test.

Changing the Run Options


Chariot gives you three ways to decide how a test run completes, and two ways for the endpoints to report their
results. Other run options affect what is measured in a test and how failures are handled.
You can vary these run options from test to test. They are saved in the test file, along with the endpoint pair
information.
How to end a test run
This section offers you three ways to determine when a test run is over. The choices are:
Run until any script completes
Chariot stops the test run when any endpoint pair completes executing its script. Any timing records
received after this first pair completes its script are discarded. This ensures that all timing records used in
the calculations were generated while the other endpoint pairs were still executing scripts.
Sometimes a pair running a streaming script will complete with fewer timing records than you specified in the script. This is because some data may have been lost (see The Lost Data
Tab for more information). A timing record is complete when the endpoint has received enough data to
fill (and sometimes overflow) a timing record. One timing record may contain lost data as well as
successfully received data. In this case, the bytes sent by Endpoint 1 will be a larger value than if no data
had been lost. This can cause the pair to complete without Endpoint 2 having completed the total number
of timing records for which data was sent.
Some endpoint pairs can run much faster than others, depending on the script variable values, CPU
speeds, and network equipment. You may find that some pairs have completed all their timings before
other pairs have even reported once. You should experiment to get a good balance among pairs.


Run until all scripts complete


All endpoint pair scripts are allowed to run to completion.
There is a possibility of generating misleading data using this option. The problem occurs as scripts finish
and there is less competition for available bandwidth. In fact, the last executing script could have the
network all to itself and report much better results than if other scripts were still executing.
This option is recommended only if the endpoint pairs do not share the same network resources, that is,
they use different routers and physical media.
Run for a fixed duration
All endpoint pair scripts run for a fixed period of time, ignoring their number_of_timing_records in their
output loop. At the end of the period, the endpoints stop and return their results. You can choose values
from 1 second to 99 hours, 59 minutes, and 59 seconds. We recommend 2 to 5 minutes for most
performance testing.
This is the recommended option for most performance testing.
You may set a much longer duration for stress testing. Application scripts generate roughly 50 timing
records per minute on a LAN, so setting the duration to long periods can give you an enormous number of
timing records, potentially exceeding the storage capacity of some console computers. We recommend
tuning the inner loops of your scripts to generate timing records less frequently, as well as some
experimentation before running multi-hour, multi-pair stress tests. The run time duration is checked every
time an END_LOOP or END_TIMER command is executed. This may cause the actual run time to
slightly exceed the configured run time.
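As a rough illustration of how quickly timing records can accumulate, the following back-of-the-envelope sketch in Python uses purely illustrative numbers; the 50-records-per-minute figure is the approximate LAN rate mentioned above, and actual rates depend on your scripts, CPUs, and network.

    # Rough estimate only; not part of Chariot itself.
    RECORDS_PER_PAIR_PER_MINUTE = 50   # approximate LAN figure quoted above

    def estimated_timing_records(pairs, duration_minutes,
                                 rate=RECORDS_PER_PAIR_PER_MINUTE):
        return pairs * duration_minutes * rate

    # A 4-hour stress test with 25 pairs:
    print(estimated_timing_records(pairs=25, duration_minutes=240))   # 300000 records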
How to report timings
This lets you choose how the Endpoint 1 computers report their results. The two choices are:
Batch
Timing records are saved at Endpoint 1 and forwarded to the console at the end of the test. Results are not
displayed until the test completes or until 500 timing records have been collected. This keeps network
traffic from timing records from interfering with the actual performance measurements. If you want to
find out the progress of the test, press the Poll button.
Batch reporting is always the recommended option for any performance testing.
Real-time
Every time a timing record is created, it is sent back to the console. The console updates the Test window
as the timing records are received, letting you see how the test is progressing. While this is handy for
verifying tests, real-time operation can have dramatic, negative effects on the test being run. Results are
updated at least every 5 seconds. The specific amount of time between updates depends on the number of
pairs in the test.
We strongly recommend doing your real measurements with batch timings. This avoids the extra network traffic of real-time operation. This extra network traffic is doubled when executing a streaming script, since the timing records are sent from Endpoint 2 to Endpoint 1, which then forwards the records to the console.
If you are running a test containing multicast groups in loopback in real-time, it may take several minutes for the timing records from the test to be shown in the console. While real-time results look cool, they consume resources in the network, at the endpoints, and at the console. You really don't want to use real-time for doing useful performance measurements. The worst case behavior of real-time is where you have lots of endpoints and they're each generating timing records frequently.


How to handle failures


Three run options give you some control in failure-prone networks:
Stop run on initialization failure
When you start a test, you're never completely sure whether all the endpoints can be reached. If your test involves many different endpoints, you may want to run the test even if some of the endpoints are unavailable.
Checking the Stop on initialization failure box causes Chariot to stop the run when any endpoint can't
pass all the initialization steps. If you leave the box unchecked, the test will be run if at least one endpoint
pair can be initialized. Those endpoints that cannot be initialized are omitted from the results.
Connect timeout during test
You may be testing in noisy networks, where long connections are frequently dropped. Chariot can retry
its connection attempts for the number of minutes you specify here. If that amount of time elapses and a
connection could not be established, Chariot declares a connection failure and issues the appropriate error
message.
A value of 0 minutes means no retries are attempted; connection failure is declared after the first
unsuccessful connection attempt. This was the only option available in Chariot v2.0 and earlier.
This timeout tracks errors encountered on CONNECT_INITIATE commands in a script; errors that occur
on SEND, RECEIVE, or other commands will still cause a running test to stop. Thus, this timeout is most
helpful in scripts with short connections.
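Conceptually, the retry behavior works like the sketch below. This is only an illustration of the idea, not Chariot's actual code; the try_connect callable is a hypothetical placeholder for a single connection attempt.

    import time

    def connect_with_timeout(try_connect, timeout_minutes):
        # Retry until a connection succeeds or the timeout window elapses.
        # A timeout of 0 means one attempt only, as in Chariot v2.0 and earlier.
        deadline = time.time() + timeout_minutes * 60
        while True:
            if try_connect():              # hypothetical: returns True on success
                return True
            if timeout_minutes == 0 or time.time() >= deadline:
                return False               # connection failure is declared
            time.sleep(1)                  # brief pause before the next attempt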
Stop test after running pairs fail
You may be developing a test where you want to let some pairs fail during the execution of the scripts, but
you want the remainder of the pairs to continue executing their scripts. Chariot uses this value when the
test enters the running state. Once in running state, Chariot lets the specified number of pairs fail before terminating the test.
Polling endpoints
You can choose to have the endpoints polled on a regular period, and set this period in minutes. See
Polling the Endpoints to determine whether you need to poll.
Collect endpoint CPU utilization
Select this box for Chariot to collect CPU utilization data for the endpoint computers executing the test.
CPU utilization is the percentage of available CPU time on a computer spent executing all the processes on
the machine. The CPU Utilization percentage is shown in the Percent CPU Utilization of E1 column and
the Percent CPU Utilization of E2 column on the Raw Data Totals Tab of the Test window. These
columns are only shown if this checkbox is selected. The % CPU Utilization values are shown only after a
pair completes. While the test is still running, "n/a" is shown.
This percentage is an approximation based on CPU utilization samples taken during the test. Sampling is
started during the first CONNECT or ACCEPT command in the script and continues until the script is
complete. After the script completes, the average of the samples is calculated and reported back to the
console.
For machines with more than one CPU, Chariot calculates the CPU utilization percentage by adding
together the percentage for each CPU and then dividing this amount by the number of CPUs. For
example, if a computer has two CPUs and one CPU is 50% utilized and the second CPU is idle, the CPU utilization of the process is calculated as (50% + 0%)/2 = 25% CPU utilization.
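A minimal sketch of that averaging, in Python (illustrative only; Chariot performs this calculation internally at the endpoint):

    def overall_cpu_utilization(per_cpu_percentages):
        # Average the utilization reported for each CPU, as described above.
        return sum(per_cpu_percentages) / len(per_cpu_percentages)

    print(overall_cpu_utilization([50, 0]))   # (50 + 0) / 2 = 25.0 percent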
CPU Utilization is supported on the following operating systems:

AIX, version 4.1.4
Digital UNIX Alpha, version 4.0B
HP-UX, version 10.10 or later
Linux, version 2.0


Novell NetWare, version 5.0, 4.x, or 3.12
OS/2 Warp 4, OS/2 Warp Connect 3; or OS/2 version 2.11
SGI IRIX, version 6.2
Sun Solaris SPARC, version 2.4 or later
Sun Solaris x86, version 2.4 or later
Windows 95
Windows 98
Windows NT x86
Windows NT Alpha
CPU Utilization is not supported on the following operating systems:

MVS (all versions)
Novell NetWare 3.12
OS/2 V3 or OS/2 V2
SCO UnixWare (all versions)
Windows 3.11
Windows NT 2000
For Novell NetWare machines with more than one CPU, Chariot returns the CPU Utilization value for the first processor only.
Validating received data
Check this box if you'd like all the endpoints in the test to validate the correctness of each byte they
receive. In some test environments you may not be sure if the data is being transferred correctly from one
endpoint to another. Endpoints can now validate that what they receive is what they expected to receive.
Choosing this option obviously slows the performance measured at the endpoints. This function should be
used for network stress testing and for testing of new hardware and software.
Using a new seed for your random variables
The SLEEP script variable lets you pause for a random time period. The sleep durations are based on random numbers, which are generated based on a "seed." If you use the same seed on consecutive runs, the random sleep durations are generated in the same sequence.
Thus, use the same seed when you are trying to get the same sleep durations, run after run. Check this box if you want the sequence of random sleep durations to be different on each run.
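The effect of the seed is the same as for any pseudo-random number generator. The Python sketch below is only an analogy (Chariot's generator and the duration ranges shown are not its actual internals), but it shows why a fixed seed reproduces the same sequence:

    import random

    def sleep_durations(seed, count=5, low=1, high=10):
        # Derive a sequence of pseudo-random sleep durations from a seed.
        rng = random.Random(seed)
        return [rng.randint(low, high) for _ in range(count)]

    print(sleep_durations(42))   # same seed -> same sequence, run after run
    print(sleep_durations(42))   # identical to the line above
    print(sleep_durations(7))    # different seed -> different sequence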
You can also change your datagram parameters for a test, from the default values you set in the User
Settings notebook.

Running a Test
Before running a test, the endpoint programs for the pairs in your test must be active, as well as their
underlying network software.

Be sure the underlying network software for the protocols you are using is configured and active at the
console and each endpoint in the test. It is probably best to start the network software at boot time.

Be sure the Endpoint program is active on every endpoint participating in the test. When the endpoint
program is running (and its output is visible), it shows whether it is successfully accessing the underlying
network protocol.

There are two ways to run a test from the console. You can either use the graphical user interface of the
Chariot console program or use the command-line program named RUNTST. Both use the same underlying
software, so the test results from each are the same.


See RUNTST (Running Tests) on page 111 in the Using the Command-Line Programs chapter for details on running a test from the command line.
More typically, you'll run tests using the graphical console program. In a Test window, select the Run item from the Run menu or press the Run button.
Before running a test, we recommend saving your test setup. Test runs involve complex, extended interaction with multiple computers and network programs. Unexpected errors can cause problems where a run cannot complete, and thus the test cannot be saved. Saving the test before the run lets you avoid recreating your setup of endpoint pairs and scripts.

Polling the Endpoints


When you poll the endpoints during a running test, the console sends a message to each of the Endpoint 1 computers in the test. The endpoints return the number of timing records they've generated so far for this test.
There are two good reasons to poll the endpoints during a running test:

You're running the test in Batch operation, and you'd like to know how many timing records have been generated. The endpoints return just the number of timing records; they don't send the timing records they're holding, but haven't yet sent.

You suspect that one or more Endpoint 1 computers can no longer be reached. If Endpoint 1 is powered off during a test, the console never actually knows, since it doesn't maintain a connection while a test is running. Polling forces the console to reach each Endpoint 1.

You can adversely affect your test results by polling too frequently.
The more endpoints involved in a test, the less often Chariot refreshes status changes in the Test window. This
is done to reduce the overhead of updating records at the console. For large tests, this refresh period can be as
long as five seconds. The true status of the endpoint (polling or running) may not be reflected for several
seconds, even though the run status has changed while the test is running.

Stopping a Running Test


When you stop a test, the console sends a message to Endpoint 1 of each pair, directing them to stop executing their scripts and to return any completed timing records that they haven't yet sent. Stopping goes quickly, except when:
1. A script is sending a large amount of data inside a transaction. The endpoint does not stop until it reaches either an END_LOOP or END_TIMER command.
2. There are errors at an endpoint.
3. There is excessive network congestion.
For either of these last two, you may wait indefinitely.

When you stop a test, Chariot shows a dialog with the progress: how many seconds have elapsed since you chose to stop. If you think you've waited far too long for a test to stop, press the Abandon Run button, which appears after 10 seconds. The console has asked the Endpoint 1 computers to stop, but they haven't yet returned all their timing records.
Abandon run is a severe action, to be used sparingly. It is possible that some endpoints are still finishing the
execution of a script, or that they are trying to send their timing records to the console. We recommend


waiting several minutes before starting another test to the same endpoints, if you've abandoned a run. If, on a subsequent run after abandoning a run, Chariot encounters endpoint errors or an endpoint does not respond, you may need to restart the endpoint.
When a pair fails, Chariot informs all the other pairs to stop after their initialization step unless you have specified otherwise in the Run Options. If your test appears to be hung in initializing state after a pair fails, it may be because there is a network problem that keeps Chariot from detecting the failure. In this case, you may need to stop the test manually.
You can stop a running test by:

Choosing the red stop sign icon on the toolbar, or


Selecting the Stop menu item in the Run menu, or
Closing the Test window with the running test, or closing the Main window.

Understanding the Run Status


Endpoint pairs go through a progression of stages. Their status at each stage is shown below. They are listed
in the order from lowest to highest, that is in the order they would appear if you chose to sort on Run status
in ascending order (see Changing the Way Pairs are Shown for information on selecting sort criteria in the
Sort window).
Resolving names
The console is determining the actual network addresses if domain names or aliases were supplied in the
test setup. See the Tips for Testing chapter on page 133 for more information.
Initializing
The console is contacting each of the Endpoint 1 computers, and sending each of them their script. Each Endpoint 1 program splits the script in half, and forwards the proper half to Endpoint 2. There are three steps in the initializing process.
1. The console contacts Endpoint 1 and/or Endpoint 1 contacts Endpoint 2.
2. Both Endpoint 1 and Endpoint 2 have been successfully contacted.
3. The script commands are sent to Endpoint 1 and Endpoint 2.

Initialized
An individual endpoint pair reaches this stage when it has completed Initializing, and reported back to
the console. When all endpoint pairs reach this stage, the console issues the calls to start all the scripts
executing.
n/a
The status n/a means that either the test has not started running or the test has completed running but does
not have enough information to return the data.
Running
The scripts are running between the endpoints in each endpoint pair.
If you've set up the test to "Run until any script completes," Chariot shows the estimated time remaining in the status bar.


Polling
The console can poll endpoints on a timed basis; you can also manually poll by pressing the Poll icon on
the toolbar. When you poll the endpoints during a running test, the console sends a message to each of the
Endpoint 1 computers in the test. An endpoint pair is in Polling status when it is returning the number of
timing records generated so far for this test.
Requested stop
The test is over; the console has sent a request to each endpoint pair to stop the script now executing. An
endpoint pair has the Requested stop status while the console waits to hear back from Endpoint 1 in each
pair that it is now stopping.
Stopping
This stage can be reached in three conditions:

Some endpoint pair completed and you had chosen to end the run when the first endpoint pair
completes or run for a fixed duration. That has occurred, and the console is stopping all the
remaining endpoint pairs.

You've chosen Stop a running test from the Results window menu, or you've chosen to close the window. The console is stopping all the running endpoint pairs.

An error occurred on one of the pairs, so the console is stopping the other active pairs.
Running tests are not stopped in the middle of a transaction; endpoints only stop after an END_TIMER command. Stopping can take a long time if you're running a test with large SEND sizes (say, you're simulating a file transfer of more than 10 million bytes over a LAN). See Script Command Descriptions in the Working with the Script Editor section for more information about the END_TIMER and SEND commands.
Stopping can also take between 20 and 50 seconds when running pairs using SPX on Windows NT, doing
loopback (both endpoints have the same address). If the endpoint is on a RECEIVE call, the protocol
stack can pause for almost a minute before returning.
Finished
The run has completed. If the test ran long enough so that results were generated, they are shown.
Error detected
At least one of the endpoints has detected an error, which it has reported to the console. Depending on
your Run Options settings, the console may now be trying to stop the other running pairs. Subsequent
timing records received by the console will be discarded.
Abandoned
The running test was stopped by you, then you pressed the Abandon Run button. This endpoint pair was running, but the console has abandoned it without waiting for the remainder of its timing records. The two endpoints may still be executing their scripts and attempting to send timing records back to the console, which is now discarding them. After abandoning endpoints that you think were very busy, it is best to wait about two minutes before starting another performance test.

The Window Menu


The Window menu lets you move among your open windows in Chariot.
Select Main window to bring Chariot's main window to the foreground. If you are currently viewing the Comparison Window, this menu item is not displayed.


Select Comparison window to bring Chariot's comparison window to the foreground. If you are currently viewing the Comparison Window, this menu item is not displayed.
Each open test window (up to nine) is displayed on the menu. Select the name of the test to bring the Test
window to the foreground.

The Test Setup Tab


The Test Setup tab is shown when you create a new test. The columns in the right-hand side reflect the values
entered in the Edit Pairs dialog: the addresses of Endpoints 1 and 2, the protocol, service quality, and script to
be used between them, a comment about the pair, and then the information about the connection from the
console to Endpoint 1.
To edit an existing pair, double-click on its row in the Test window.
If the pair is a member of a multicast group, the Edit a Multicast Group dialog is shown.

The Throughput Tab


The Throughput tab is shown while a test is running or after a test has been run and has results.
For groups of pairs, the throughput minimum displayed in the Minimum column is the smallest value of all
the minimums for the pairs in that group; the maximum displayed in the Maximum column is conversely the
largest of the maximum values.
The Measured Time column shows the sum of the timing record durations for each pair. To see the
individual timing records for an endpoint pair, select its row, then choose the Timing records menu item from
the View menu (or right-click your mouse on a row).
The Relative Precision column gives you a feel for the consistency among a pair's timing records. The Confidence Interval column and the Relative Precision column display "n/a" while a test is running; real numbers are shown when a test is over (and there are at least 2 timing records).
If you choose to display this data using the bar graph with max/avg/min, the display shows (for each pair) the
maximum value at the top of the upper bar segment, the average value at the top of the middle bar segment,
and the minimum value at the top of the lower bar segment.

The Transaction Rate Tab


The Transaction Rate tab is shown while a test is running or after a test has been run and has results. The
transaction rate is the number of script transactions that are executed per second.
For groups of pairs, the transaction rate minimum displayed in the Minimum column is the smallest value of
all the minimums for the pairs in that group; the maximum displayed in the Maximum column is conversely
the largest of the maximum values.
The Measured Time column shows the sum of the timing record durations for each pair. To see the
individual timing records for an endpoint pair, select its row in the Test window, then choose the Timing
records menu item from the View menu.


The Relative Precision column gives you a feel for the consistency among a pair's timing records. The Confidence Interval column and the Relative Precision column display "n/a" while a test is running; real numbers are shown when a test is over (and there are at least 2 timing records).
If you choose to display this data using the bar graph with max/avg/min, the display shows (for each pair) the
maximum value at the top of the upper bar segment, the average value at the top of the middle bar segment,
and the minimum value at the top of the lower bar segment.

The Response Time Tab


The Response Time tab is shown while a test is running or after a test has been run and has results. The response time is the inverse of the transaction rate; it is the time, in seconds, needed for one transaction.
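For example, the relationship is a simple reciprocal (illustrative numbers only):

    transaction_rate = 20.0                  # transactions per second
    response_time = 1.0 / transaction_rate   # seconds per transaction
    print(response_time)                     # 0.05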
For groups of pairs, the response time minimum displayed in the Minimum column is the smallest value of all
the minimums for the pairs in that group; the maximum displayed in the Maximum column is conversely the
largest of the maximum values.
The Measured Time column shows the sum of the timing record durations for each pair. To see the
individual timing records for an endpoint pair, select its row in the Test window, then choose the Timing
records menu item from the View menu.
The Relative Precision column gives you a feel for the consistency among a pair's timing records. The Confidence Interval column and the Relative Precision column display "n/a" while a test is running; real numbers are shown when a test is over (and there are at least 2 timing records).
If you choose to display this data using the bar graph with max/avg/min, the display shows (for each pair) the
maximum value at the top of the upper bar segment, the average value at the top of the middle bar segment,
and the minimum value at the top of the lower bar segment.

The Lost Data Tab


The Lost Data tab is shown while a test containing pairs with streaming scripts is running or has been run and
has results. This tab shows high level information about streaming results. The Datagram tab shows
additional information about individual datagrams.
The Endpoint 1 Throughput field is a measure of the throughput as seen by Endpoint 1. The E1 Throughput field is only applicable to streaming pairs. This field is calculated by dividing the total number of bytes sent by the total time. If there is no lost data for the pair, then the Endpoint 1 throughput will be the same as the throughput of the receiver. If data is lost, then the Endpoint 1 throughput will be higher than the receiver's throughput, since more bytes were sent than received over the same period of time.
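A small worked example of that calculation, with illustrative values (not taken from any real test):

    bytes_sent     = 10000000    # bytes sent by Endpoint 1 (streaming pair)
    bytes_received = 9000000     # bytes actually received by Endpoint 2
    measured_time  = 60.0        # seconds

    e1_throughput       = bytes_sent / measured_time       # about 166667 bytes/sec
    receiver_throughput = bytes_received / measured_time   # 150000 bytes/sec
    print(e1_throughput, receiver_throughput)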
The Measured Time column shows the sum of the timing record duration for each pair. To see the individual
timing records for an endpoint pair, select its row in the Test window, then choose the Timing records menu
item from the View menu.
The Relative Precision column gives you a feel for the consistency among a pair's timing records. The Confidence Interval column and the Relative Precision column display "n/a" while a test is running; real numbers are shown when a test is over (and there are at least 2 timing records).


The Lost Data graph for this tab shows the percentage of lost bytes for the selected pairs/groups over elapsed
time. This graph can show you at what time during the test the data was lost. If there is no lost data for the
selected pairs/groups, this graph is shown empty.
If you choose to show this data in a pie graph, the graph shows cumulative totals. A single group or pair shows
the same as multiple groups/pairs.

The Jitter Tab


The Jitter tab is shown after a test using RTP has been run and has results. You can see results on the jitter
statistics. Jitter support is only provided for a test using the RTP protocol. For more information on Jitter, see
Understanding Jitter Measurements in the Working with Datagrams and Multimedia Support chapter on
page 41.
To see the individual timing records for an endpoint pair, select its row in the Test window, then choose the
Timing records menu item from the View menu.
The Average column shows the average of the jitter statistics in milliseconds for the test. The jitter statistics of each timing record in the test are added together and divided by the total number of timing records for the test. The Minimum column shows the lowest jitter statistic for an individual timing record in the test and the Maximum column shows the highest jitter statistic.
Jitter totals can be shown on a group basis. The group total for jitter is the average of the average jitter for each pair. Total jitter is not an aggregate; it is similar to response time.
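For example, the per-pair averages and the group total are computed as plain averages (illustrative numbers only):

    def average(values):
        return sum(values) / len(values)

    pair1_jitter = [2.0, 4.0, 3.0]   # jitter (ms) from each timing record; average = 3.0
    pair2_jitter = [6.0, 2.0]        # average = 4.0

    pair_averages = [average(pair1_jitter), average(pair2_jitter)]
    print(average(pair_averages))    # group total = (3.0 + 4.0) / 2 = 3.5 ms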
Because jitter statistics are not part of a whole, you cannot view a pie graph for jitter data. To determine whether the jitter has exceeded your thresholds, use the histogram graph. You can easily see the ranges of the jitter values and determine if the amount of jitter in the test exceeds the thresholds.

The Raw Data Totals Tab


The Raw Data Totals tab is shown while a test is running or after a test has been run and has results. This lets
you see the number of timing records and transactions for each pair, as well as the number of bytes sent and
received by Endpoint 1.
The Measured Time column shows the sum of the timing record durations for each pair. To see the
individual timing records for an endpoint pair, select its row in the Test window, then choose the Timing
records menu item from the View menu.
The Relative Precision column gives you a feel for the consistency among a pair's timing records. The column shows "n/a" while a test is running; real numbers are shown when a test is over (and there are at least 2 timing records).
The Percent CPU Utilization of E1 column and Percent CPU Utilization of E2 column show the CPU
utilization percentage for the duration of the test for the endpoint pair. These columns are only shown if the
Collect Endpoint CPU Utilization checkbox on the Run Options dialog is selected. Each column shows n/a
when the CPU value has not yet been returned from the endpoint. The column shows Not supported if the
endpoint does not support CPU utilization. See Collect endpoint CPU utilization on page 80 for information
on how Chariot calculates the CPU utilization percentage.


The Endpoint Configuration Tab


The Endpoint Configuration tab is shown while a test is running or after a test has been run and has results.
This lets you see the software run at the endpoints in your test. The first four columns in each row show
information about Endpoint 1; information about Endpoint 2 is shown in the next four columns.

Endpoint Configuration Details Dialog


This dialog shows detailed network information about each of the endpoints in this pair. To access this dialog,
select the Show Endpoint Configuration menu item from the View menu. See Endpoint Configuration
Details in the Viewing the Results chapter on page 127 for an example.
You'll see "Not available" when you're connected to endpoints from Chariot version 1.x; they don't have the code needed to respond to the console's query.

The Datagram Tab


The Datagram tab is shown while a test is running or after a test containing datagram pairs has been run and
has results. You can see details on the handling of datagrams. Datagram support is only used for tests using
the IPX, RTP, or UDP protocols; for other protocols, n/a is shown. Tests using IP Multicast are slightly
more complex.
The Total DGs Sent by E1 column shows the total number of datagrams sent by Endpoint 1. This column is
shown for tests using either a streaming or non-streaming script. For non-streaming scripts, this column includes datagrams that were retransmitted.
The Duplicate DGs Sent by E1 column shows the number of datagrams Endpoint 1 had to retransmit because it didn't receive an acknowledgment from Endpoint 2 before the Retransmission Timeout period expired. The number of duplicates sent is the number of duplicates received plus the number of datagrams lost. This column is only applicable for tests using a non-streaming script.
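In other words, the duplicate and lost counts are related by simple arithmetic; for example, with purely illustrative values:

    duplicate_dgs_received = 12   # Duplicate DGs Received by E2 (illustrative value)
    dgs_lost               = 8    # DGs Lost, E1 to E2 (illustrative value)
    duplicate_dgs_sent     = duplicate_dgs_received + dgs_lost
    print(duplicate_dgs_sent)     # 20 duplicate datagrams sent by Endpoint 1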
The Total DGs Received by E1 column shows the number of datagrams, both original and retransmitted,
received by Endpoint 1. In an ideal network setting, where no datagrams are lost or delayed, the number
shown in the Total DGs Received by E1 column will be the same as the number shown in the Total DGs
Sent by E2 column. Both of these columns are only applicable for tests using non-streaming scripts.
The Duplicate DGs Received by E1 column shows the number of datagrams with the same sequence number that were received by Endpoint 1.
The DGs Lost, E1 to E2 column shows datagrams sent from Endpoint 1 to Endpoint 2 that were lost and not
received by Endpoint 2.
The number shown in the Datagrams lost, E1 to E2 column is an approximation: some of these lost
datagrams sent by Endpoint 1 may have been merely delayed in the network and would have been received by
Endpoint 2 given enough time. Once a script completes, however, there's no longer any need for Endpoint 2 to wait to receive those datagrams. This is only for non-streaming scripts.
The Total DGs Sent by E2 column shows the total number of datagrams sent by Endpoint 2. This column is
applicable for tests using non-streaming scripts. For non-streaming scripts, this includes datagrams that were
retransmitted.


The Duplicate DGs Sent by E2 column shows the number of datagrams Endpoint 2 had to retransmit because
it didn't receive an acknowledgment from Endpoint 1 before the Retransmission Timeout period expired. The
number of duplicates sent is the number of duplicates received plus the number of datagrams lost. This column
is only applicable for tests using non-streaming scripts.
The Total DGs Received by E2 column shows the number of datagrams received by Endpoint 2. This column
is shown for tests using either a streaming or non-streaming script.
The Duplicate DGs Received by E2 column shows the number of datagrams with the same sequence number
that were received by Endpoint 2. This column is applicable for streaming or non-streaming tests.
The number shown in the Datagrams lost, E2 to E1 column is an approximation: some of these lost
datagrams sent by Endpoint 2 may have been merely delayed in the network and would have been received by
Endpoint 1 given enough time. Once a script completes, however, there's no longer any need for Endpoint 1 to
wait to receive those datagrams.
The Datagrams Out of Order column shows the number of datagrams that were received out of sequence. This column is only applicable for tests using streaming scripts.
Here are some hints and tips for interpreting the data shown on this tab:

If the number shown in the Duplicate Datagrams Received by E1 column is large as a percentage of
Total Datagrams Received by Endpoint 1 column, consider setting the Retransmission Timeout higher,
to prevent Endpoint 2 from retransmitting too often. However, if datagrams are being lost, changing this
will only increase the number of duplicate datagrams.

If you are using a non-streaming script and the number shown in the Datagrams lost, E1 to E2 column is
too large, the Window Size parameter in the Datagram Run Options is too large or the network is being
used by too many applications at the same time.

If the test is using a streaming script and there is a large number of duplicates at Endpoint 2, there is
probably a problem in the network configuration, such as a loop. Look at the configuration of network
elements, such as a router, to find the problem.

If you are using a streaming script and there is a high number of lost datagrams in the test, try changing the data rate to a slower rate. The sender may be sending the datagrams faster than the receiver can receive the data.

Examining Your Timing Records


You can display the Timing Records dialog while a test is running or after a test has been run and has results.
This section discusses viewing timing records for tests that use Non-Streaming Scripts and tests that use
Streaming Scripts.
Select a pair in the Test window, then choose the Timing records menu item from the View menu. This dialog
shows each of the individual timing records for this endpoint pair. For detailed information on each column,
see the Viewing the Results chapter on page 117.
Press the Refresh button to update the contents while a test is running; newly-received timing records are
added to the bottom of the list. Press the Show latest button to both refresh the contents and scroll to the last
timing record.


Non-Streaming Scripts
For pairs that did not run a streaming script, the following columns are shown in the Timing Records dialog.

Record Number: number of the timing record


Elapsed Time (sec): the time since the test began, when each timing record was cut
Measured Time (sec): the seconds measured by a single timing record
Inactive Time (sec): the seconds when nothing was being measured at Endpoint 1
Throughput: throughput viewed by Endpoint 1
Transaction Rate (#/sec): transaction rate, in transactions per second
Response Time (sec): response time, in seconds
Transaction Count: number of transactions
Bytes Sent by E1: number of bytes sent by Endpoint 1
Bytes Received by E1: number of bytes received by Endpoint 1

In addition, if a datagram protocol is used for an endpoint pair, datagram statistics are shown on the far right
side.

Streaming Scripts
For pairs that ran a Streaming script, the following columns are shown in the Timing Records dialog.

Record Number: number of the timing record


Elapsed Time (sec): the time since the test began, when each timing record was cut
Measured Time (sec): the number of seconds measured by each individual timing record
Inactive Time (sec): the number of seconds when nothing was being measured
Throughput: throughput viewed by Endpoint 1
Bytes Sent by E1: number of bytes sent by Endpoint 1
Bytes Received by E2: number of bytes received by Endpoint 2
Bytes Lost, E1 to E2: number of bytes of data lost, as seen by Endpoint 2
Jitter Total: for RTP pairs only, total jitter value
Total DGs Sent by E1: number of datagrams sent by Endpoint 1
Total DGs Received by E2: number of datagrams received by Endpoint 2
DG Lost, E1 to E2: number of datagrams lost, as seen by Endpoint 2
DG Out of Order: datagrams received by Endpoint 2 that were received out of order


Keys Help for the Test Window


You can use the following keys and key combinations in any Chariot Test window instead of using the mouse.
F1: get help for the Chariot Test window.
F2: get an index of all the available Chariot help topics.
F5: poll the endpoints during a running test.
F9: show the keys and key combinations available in a window.
F10: get information about how to use operating system help.
F11: get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+A: select all the pairs in a test.
Ctrl+C: copy the test setup for one or more pairs to the clipboard.
Ctrl+D: delete the highlighted endpoint pair. You are asked whether you are sure you want to delete that pair.
Ctrl+E: edit the highlighted endpoint pair(s). This causes Chariot to show the dialog box with all the information about the highlighted endpoint pair (if just one is selected), or to appear with blank fields (if more than one are selected). You can change any of the names, addresses, or other values associated with the pairs.
Ctrl+G: add a new multicast group.
Ctrl+N: renumber the pairs in a test.
Ctrl+P: add a new endpoint pair to a test. If you're working with a test that already has results and you attempt to add a new pair, Chariot asks you whether you want to discard your existing results.
Ctrl+R: run this test. Only one test can be run at a time, to avoid conflicting performance data.
Ctrl+S: save this test setup and its results to a Chariot test file. If the test is untitled, the Save As dialog prompts you to choose a filename.
Ctrl+T: stop a running test.
Ctrl+V: paste the test setup for one or more pairs from the clipboard.
Ctrl+X: cut the test setup for one or more pairs to the clipboard.
Alt+F4: this key combination can be used to close any window or dialog box. When used to close a dialog box, it has the same effect as pressing the Esc key or selecting Cancel with the mouse.

In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.


The Comparison Window


You can compare the results from multiple tests in the Comparison window. You can view any specific or
summary information about tests currently displayed in a Test window.
To compare tests, open the tests you want to compare and verify that the tests have been run. Select the
Compare Tests menu item from the Tools menu on the Main window. The Comparison window shows the
results of all the tests open in Chariot.
The markings for graphing or expansion selected in the Test window are not mirrored in the Comparison
window so you can manipulate results in the Comparison window without affecting the way the original tests
are shown. Open the Graph Configuration to show the Graph Content at the Test level and mark the test-level items for graphing. The resulting graph lets you compare the tests' overall results.
You can also use similar techniques to compare specific groups or pairs across multiple tests. You can perform
comparisons with raw data numbers by collapsing and expanding the appropriate tests and groups in the upper
part of the Comparison window.
You can only open one occurrence of the Comparison window per computer. You cannot modify tests or run
tests from the Comparison window. To modify a test, access the Test window for the test.
To remove a test from the Comparison window, close that particular test by going to the Test window for the
test and selecting the Close menu item from the File menu. When you return to the Comparison window, the
test is not shown.
If a test is currently running in a Test window, the Comparison window shows "running" as the run status and the results are not updated on the Comparison window until the test completes. If an open test does not currently have results associated with the test, "n/a" displays in the result columns.
A Comparison window is partitioned into areas accessible by tabs. The Test Setup tab lets you view information about the test and the other tabs let you view the results of a test. Pressing the Throughput, Transaction Rate, Lost Data, Multimedia, or Response Time tab displays the corresponding graph. Choosing one of the other four tabs causes the pair information in the top portion of the Comparison window to change; the graph display at the bottom remains the same.
The toolbar icons offer shortcuts to commonly-used operations. Press the disk icon on the far left side of the window to save the current comparison. The next icon copies pairs to the clipboard. The next seven icons provide the most common grouping for endpoints; the two icons to their right let you expand or collapse all groups and tests. Pressing the Help icon on the right side of the toolbar gives assistance for the Comparison window.
You can access a shortcut Edit menu by right-clicking the mouse button when a pair is selected. To display a
shortcut menu containing the Graph Configuration and Throughput Units menu items, right-click the mouse
button from the graph section of the Comparison window.


Here's where to get more information:
Menu items:
The File menu: to open, save, print, or export
The Edit menu: to copy pairs, mark, unmark, select/deselect
The View menu: to change how the pairs are grouped and displayed or to change Tabs
The Help menu: to get more information
The Window menu: to switch between Chariot windows
Tabs:
Test Setup Tab: change pairs in a test
Throughput Tab: shows throughput results
Transaction Rate Tab: shows transaction rate results
Response Time Tab: shows response time results
Lost Data Tab: shows lost data results
Jitter Tab: shows jitter data results
Raw Data Totals Tab: shows byte count results
Endpoint Configuration Tab: shows details about each endpoint
Datagram Tab: shows datagram details (for IPX, RTP, and UDP)

The File Menu (Comparison Window)


The menu items on the File menu let you work with your comparison entries. You can save your comparison
setup if you need to create a repeatable comparison among a set of tests.
Select the Print menu item to print any aspect of the comparison. You can print the comparison of the tests or
the graphs associated with the comparison.
You can export formatted results to
an ASCII text file,
a Web page file, in HTML format,
a comma-separated values file, in CSV format, or
a spreadsheet file, in WK3 format.

See the File Types and How They Are Handled section on page 108 for information on how these files are
handled.
Select the Close menu item to exit the Comparison window.

Saving a Comparison
From the Comparison window, you can save the comparison you are currently viewing. Chariot saves all the titled tests' filenames (with directory location), the current grouping settings, sorting settings, graphing settings, current notebook tab, and the current throughput units being used. Untitled tests are not stored in the comparison.
If you want to save a comparison for the first time or want to save an existing comparison under a different
name, select the Save Comparison As menu item from the File menu. The Save Comparison As dialog
appears. In the Save Comparison field, enter the name you want to save the comparison under. You can also
select an existing comparison name. The special characters *, \, and ? are not allowed in a comparison name.
To save the comparison, press the OK button.


If you want to save a previously saved comparison under the same name, select the Save menu item from the
File menu. Chariot saves the open comparison under the current name. If you expand a test or mark pairs in
the Comparison Window, the Save menu item is disabled. The Comparison Window does not save expanded
tests or marked pairs.

Opening a Comparison
From the Comparison window you can open a previously saved comparison.
From the File menu, select the Open Comparison menu item. The Open Comparison dialog opens.
In the Select name of the comparison configuration you would like to open field, select the name of the
comparison you want to open from the list.
To open the selected comparison, select the OK button. Chariot closes all open Test windows that are not part of the comparison and then opens a Test window for each test in the comparison. The Comparison window containing the selected comparison is displayed.

The Edit Menu (Comparison Window)


The menu items on the Edit menu let you work with the items displayed in the Comparison window. You can
copy pairs displayed in the Comparison Window and then paste the pairs into a Test window. You can also
select and deselect multiple pairs from this window.
You can access a shortcut Edit menu by right-clicking the mouse button while a pair is highlighted.
Copy
You can copy an existing pair or group of pairs from the Comparison window. First, select the pair(s) to be copied by clicking on the individual pair. Once selected, the pair is highlighted. When copying three or more pairs, you can hold down the Shift key and click on the first and last pairs to be copied. The two pairs on which you clicked as well as all pairs in between are highlighted. Once selected, you can copy pairs in one of three ways:
1. choose Copy under the Edit menu
2. press Ctrl+C
3. use the Copy icon (two paper sheets) on the toolbar

After you have copied a pair, you can paste the pair into a Test window. When you access a Test window, the
Paste menu item under the Edit menu and the Paste icon on the toolbar are now available. You cannot paste
pairs into the Comparison window. After you have copied a pair, you can also paste text about the pair into a
text window such as Microsoft Notepad.
Select all pairs
To select all of the pairs in the Comparison window, go to the Edit menu and select Select all pairs, or press Ctrl+A. All of the pairs in the Comparison window are highlighted to indicate that they are selected.
Deselect all pairs
To deselect all of the highlighted pairs in the Comparison window, go to the Edit menu and select Deselect all
pairs. All pairs that were previously highlighted are no longer selected.


Mark selected items


You can mark any pair, group, or test shown in the Comparison window. Mark a pair, group, or test when you
specifically want to include it in a graph or in a printed report. See Graph Configuration in The Test
Window section.
Choose this menu item to mark all the items that are currently shown as selected (highlighted) in the
Comparison window. The indicator (a black dot) that an item is marked is shown in the second column on the
left-hand side of the Comparison window.
Unmark selected items
In contrast to marking pairs, groups, or tests (as discussed above), you can unmark any item shown in the
Comparison window. Unmark an item when you specifically don't want to include it in a graph or printed report.
Choose this menu item to unmark all the items that are currently shown as selected (highlighted) in the
Comparison window.

Keys Help for the Comparison Window


You can use the following keys and key combinations in any Chariot Comparison window, instead of using the
mouse.
F1: get context-sensitive help for the Chariot Comparison window.
F2: get the help table of contents and an index of all the available Chariot help topics.
F3: close the Comparison window.
F9: show the keys and key combinations available in a window.
F10: get information about how to use operating system help.
F11: get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+A: select all the pairs in a comparison.
Ctrl+C: copy the test setup for one or more pairs to the clipboard.
Ctrl+O: open a comparison you have previously saved.
Ctrl+S: save this comparison. If the comparison is untitled, the Save As dialog prompts you to choose a filename.
Alt+F4: this key combination can be used to close any window or dialog box. When used to close a dialog box, it has the same effect as pressing the Esc key or selecting Cancel with the mouse.

In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.


Working with the Error Log Viewer


Whenever one of the console programs encounters a problem, it logs the problem information into an error log file at the console. Similarly, whenever one of the endpoint programs encounters a problem it can't report to the console, it logs that problem to an error log file at the endpoint. You can then open the error log file in the Error Log Viewer.
Chariot defaults to writing the console error log file to the directory where the Chariot program is installed. You can change the location of this error log by entering a new location in the Where to write console error logs field on the Directories tab of the User Settings notebook. The RUNTST and CLONETST error logs at the console are always written to the directory where Chariot is installed. The endpoints' error logs are written to the directory where the endpoints are installed.

The console writes to the file CHARIOT.LOG


RUNTST writes to the file RUNTST.LOG
CLONETST writes to the file CLONETST.LOG
Endpoints write to the file ENDPOINT.LOG

To open the Error Log Viewer, select the View Error Log menu item from the Tools menu on the Main
window.
The Error Log Viewer shows the record number of the entry, the date and time, the detector of the error and a
brief description of the error.
You can select the criteria for the entries that you want to view. See Filtering the Entries Displayed in the
Error Log Viewer for more information.
If you need to view an error log on an endpoint computer, use the FMTLOG program to format the binary error log. See FMTLOG (Formatting Binary Error Logs) in the Using the Command-Line Programs chapter on page 116 for more information.
Menu items in the Error Log Viewer:

"The File Menu (Error Log Viewer)" on page 97: to open a log, save a log, or exit the Error Log Viewer.

"The View Menu (Error Log Viewer)" on page 97 : to select the order of the entries shown in the Error
Log Viewer, select which entries to filter, search for a specific word or phrase, or view detailed
information about the entry.

"The Options Menu (Error Log Viewer)" on page 98 : to save settings on exit, wrap the log, or change the
Error Log Viewer font.

The Help Menuto get more information.


The File Menu (Error Log Viewer)


To open an error log file, select the Open menu item. From the Open a Log File dialog, select the file you
want to open, and then press the Open button. The error log is shown in the Error Log Viewer.
You can save the information shown in the Error Log Viewer to a file by using the Save Filtered Log As menu
item. If you have selected the filtering option, the Error Log Viewer saves only the entries that meet the
current filtering criteria. From the Save Filtered Log As dialog, enter or select the file name and press the
Save button.
Select the Exit menu item to exit the Error Log Viewer.

The View Menu (Error Log Viewer)


You can view all entries shown in the Error Log Viewer or you can view only the entries that meet specified
criteria. To view all entries, select the All Entries menu item. To specify the criteria of the entries you want
to view in the Error Log, select the Filter Entries menu item from the View menu.
The Ascending and Descending menu items on the Sort by Record Number submenu let you view the entries
shown in the Error Log Viewer in either ascending or descending order.
To update the Error Log Viewer, select the Refresh menu item. The Error Log Viewer reloads the error log
that you are viewing. Any newly-written entries are now shown in the Error Log Viewer.

Filtering the Entries Displayed in the Error Log Viewer


You can select criteria to determine which entries in the error log are shown in the Error Log Viewer. Select
the Filter Entries menu item from the View menu within the Error Log Viewer. The Entire Filter dialog is
shown.
The View From and View Through fields filter out records with certain date/time combinations. If you want
the Error Log Viewer to show the first entry generated, select the First option in the View From section. If
you want the Error Log Viewer to begin with an entry generated at a certain time, select the View From
option. In the Date fields, select the date of the first entry you want shown. In the Time fields, select the time
of the first entry you want shown.
If you want the last entry shown in the Error Log Viewer to be the last entry generated, select the Last option
in the View through section. If you want the Error Log Viewer to only contain entries generated before a
specific date and time, select the Entries on option. In the Date fields, select the date of the last entry you
want shown. In the Time fields, select the time of the last entry you want shown.
If you want the Error Log Viewer to only show entries containing a specific error message, select the Primary
Message checkbox in the Only show records with section. In the CHR field, enter the message number that
you want to view.
You can also choose to view only entries generated by a specific detector. In the Detector section, select the checkboxes for the sources whose errors you want shown in the Error Log Viewer.


Searching Within Error Log Viewer


You can search the entries shown in the Error Log Viewer to find a specific word or phrase. Select the Find
menu item from the View menu or press Ctrl+F. The Find dialog is shown.
In the Search for field, enter or select the word or phrase that you want to find. Note that the search is not
case-sensitive.
If you want to search from your current cursor position to the top of the Error Log Viewer, select the Up
option. If you want to search from your current cursor position to the bottom of the Error Log Viewer, select
the Down option.
If you want the Error Log Viewer to search through the entire error log even when your cursor is located in the middle of the error log, select the Wrap at end checkbox.
To begin the search, press the Find button. The first occurrence of the word or phrase in the Search for field
is highlighted. To find the next occurrence of the word or phrase, select the Find Next menu item from the
View menu or press Ctrl+G.

Viewing Details for an Error Log Viewer Entry


To view more information about an entry in the Error Log Viewer, highlight the entry and select the Detail
menu item from the View menu within the Error Log. You can also double-click on an entry. The Error Log
Viewer Details dialog for the selected entry is shown.
This dialog shows detailed information about the selected entry.
The Record field shows the order in which the error was detected.
To view help for the error message, press the Help for message button.
To view details about the next entry in the Error Log Viewer, press the Next button. To view details about the
previous entry in the Error Log Viewer, press the Previous button. If you have selected the Wrap Log menu item located on the Options menu, the Error Log Viewer wraps. For example, if you selected the last entry in the
Error Log Viewer, you can view the first entry by pressing the Next button. If the Wrap Log menu item is not
selected, the Next button is disabled when you are viewing the details for the last entry and the Previous button
is disabled when you are viewing the details for the first entry.

The Options Menu (Error Log Viewer)


The Wrap Log menu item lets you move on the Details dialog from the last entry shown in the Error Log
Viewer to the first entry by pressing the Next button or from the first entry to the last entry by pressing the
Previous button. To enable this option, select the Wrap Log menu item. A checkmark is shown to the left of
the menu item. To disable this option, deselect the Wrap Log menu item. The checkmark is removed.


You can save the following settings to be used the next time you access the Error Log Viewer by selecting the
Save Settings on Exit menu item:

Filtering Settings
Font Settings
Sort Order
Wrap Settings

The Error Log Viewer saves the settings for these items that you have set when you select the Exit menu item.
The Font menu item lets you change the fonts used in the Error Log Viewer.

Keys Help for the Error Log Viewer


You can use the following keys and key combinations in Error Log Viewer, instead of using the mouse.
F1: get help for the Error Log Viewer.
F2: get an index of all the available help topics.
F3: exit the Error Log Viewer.
F9: show the keys and key combinations available in a window.
F10: get information about how to use operating system help.
F11: get the About Error Log Viewer dialog, which shows your version and build level, and lets you get product support information.
Ctrl+S: save an error log file.
Ctrl+F: search for a word in the Error Log Viewer.
Ctrl+G: find the next occurrence of the last word you searched for in the Error Log Viewer.
Ctrl+O: open an error log file.
Alt+F4: close any window or dialog. When used to close a dialog, it has the same effect as pressing the Esc key or pressing Cancel with the mouse.

In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.

Working with the Script Editor


The Script Editor lets you create new scripts as well as change the existing ones. You can tailor scripts to
emulate the flows of particularly complex applications.
The Script Editor is included with Chariot and can also be accessed as a standalone product if you have
purchased the Application Scanner.


If you want to use the Script Editor as a standalone product, you can run the Script Editor by selecting the
Script Editor icon in the Application Scanner Folder.
If you want to use the Script Editor from Chariot, you can access the Script Editor two ways.

If you want to edit a script and have the changes to the existing script used by all pairs, select the Edit
Scripts menu item from the Tools menu on the Main window. Also select this menu item if you want to
create a new script and you want the script to be available to all pairs.

If you want to make changes to an existing script and have the option of saving the changes with a specific
pair, highlight the pair in the Test window and select the Edit menu item. From the Edit an Endpoint Pair
dialog, press the Edit this script button. The Script Editor is shown. Note that you can also save the
changes to a file that can be used by other pairs if you access the Script Editor from this dialog.

The main window of the Script Editor shows the commands for the script and a list of the script's variables.
In the top half of the window, Endpoint 1's portion of the script is shown on the left; Endpoint 2's on the right.
These are sequential lists of the commands (and their parameters) to be executed by the endpoints. You can
look at a long script by scrolling through it. You can access a dialog that lets you edit the highlighted
command's parameters one of three ways:
1. highlight a command and double-click
2. select the Edit parameter menu item from the Edit menu
3. highlight a command and click the right mouse button and then select the Edit menu item

The lower half of the window summarizes the script variables. You can access a dialog that lets you edit the
highlighted variable one of three ways:
1. double-click on the highlighted variable
2. select the Edit variable menu item from the Edit menu
3. click the right mouse button and then select the Edit menu item

Commands in the File menu let you handle script files and exit the editor. The Edit menu contains commands
that operate on the currently selected script commands or variables. To insert a command in a script, select a
command (or group of commands) in the top half of the window, and then choose the command to be inserted
from the Insert menu. The new command is inserted after (or around) the selected command(s).
The Application Script Name field shows a brief (40 character) description of the script. This script name is
required; it is important for identifying the script in other Ganymede Software products. If you are creating a
new script, be sure to enter descriptive information.
The toolbar provides a shortcut to the most commonly used menu items. You can move variables up and down
in the list. The Swap icon lets you move the selected command to the other endpoint. Use the Insert icons to
insert commands into the script.


Editing a Parameter of a Script Command


Most script commands have parameters which you can tailor. To edit a parameter, select the parameter in the
top half of the Script Editor window and do one of the following:
1. double-click on the parameter
2. press the Enter key
3. select the Edit Parameter menu item from the Edit menu

The Edit Parameter dialog is shown.


The Constant and Variable radio buttons let you define the type of parameter.
The Parameter field shows the parameter currently being edited. All of the parameters for the command are
available in the pulldown menu. If defined to be a variable, enter or select the name of the variable in the
Variable Name field. A list of appropriate variables previously defined in the script is available from the
pulldown menu. See Editing a Script Variable for more information on the variable fields.

Editing a Script Variable


Variables are used in scripts to let script command parameters be changed globally within a script. Variables
can be used to control LOOPs, define port numbers, specify the data type for a SEND, etc.
To edit a script variable, highlight the variable in the bottom half of the Script Editor window and do one of the
following:
1. double-click on the variable
2. press the Enter key
3. select the Edit variable command from the Edit menu

The Edit Variable dialog is shown.


The value in the Variable name field must be unique within the script and must not contain blanks.
The Default value field lets you specify the initial value for the variable, when the script is loaded from file
into a test. The field accepts numbers to 999,999,999. On some variable types, such as the buffer size on
SEND and RECEIVE, you can use other values, such as the term DEFAULT or AUTO. The DEFAULT
value depends on the network protocol and the endpoints you are using. AUTO, when entered for the
port_number variable, specifies that Endpoint 1 should choose the port number.
The type of variable used for the SLEEP command allows five values: Constant Value, Uniform Distribution,
Normal, Poisson, and Exponential. For a Constant Value, one field is presented for the value. For a
distribution, two fields let you enter the upper and lower distribution range. All values are in milliseconds.
See the Messages and Application Scripts manual for more information on the distributions.
Enter a description of the variable in the Variable comment field.


The Variable help field provides details about this variable and how to use it in the script. You can customize
the help text by entering information in this field.
Press the Reset button to return the value in the Current value field to the value that was in the Default value
field the last time you exited the Edit Variable dialog for this variable.
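As a rough illustration of the SLEEP distribution types described above, the sketch below draws a sleep value in milliseconds for the Constant Value and Uniform Distribution cases. This is not Chariot's internal logic, and the function name and values are hypothetical; the exact parameterization of the Normal, Poisson, and Exponential types is documented in the Messages and Application Scripts manual.

    import random

    # Illustrative only; not Chariot's internal calculation.
    def sample_sleep_ms(distribution, lower, upper):
        if distribution == "Constant Value":
            return lower                         # a single value, in milliseconds
        if distribution == "Uniform Distribution":
            return random.uniform(lower, upper)  # any value in the range is equally likely
        raise NotImplementedError("see the Messages and Application Scripts manual")

    print(sample_sleep_ms("Uniform Distribution", 100, 500))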

The File Menu (Script Editor)


The File menu lets you work with scripts. You can open a script in the Script Editor in two ways:
1. select the Open menu item
2. press Ctrl+O

The Open a Script dialog is shown. Select the script that you want to open. The script is shown in the Script
Editor. You can then modify the script and save the script.
You can exit the Script Editor in two ways:
1. go to the File menu and select the Exit menu item
2. press F3

If you have modified the current script and not saved your changes, a message box is shown asking if you want
to save your changes to the current script.

Adding a New Script


You can create a new script based on script templates. From within the Script Editor, select the New menu
item from the File menu. The New Script dialog is shown. Select one of the following script templates:
Basic Long Connection
This template is a version of the Credit Check transaction that uses a long connection. This is a quick
transaction that emulates a series of credit approvals. A record is sent from Endpoint 1. Endpoint 2
receives the record and sends back a confirmation. Use this template as a starting point for most scripts or
if you are unsure of which template to use.
Basic Short Connection
This template is a version of the Credit Check transaction that uses short connections. This is a quick
transaction that emulates a series of credit approvals. A record is sent from Endpoint 1. Endpoint 2
receives the record and sends back a confirmation. Use this template to create a script that initiates a
connection for each transaction.
Empty
This template contains the minimum parameters required in a script. The script template does not contain
any SEND or RECEIVE commands. Use this template as a starting point to create a script from scratch.
Streaming
This template contains the commands necessary for a streaming script. Because the script contains all of
the commands required for a streaming script, you cannot modify the structure of the script. However,
you can modify the variables in the script. Use this template for streaming scripts.
Press the OK button. The Script Editor shows the script template you selected.


Enter a brief (40 character) description of the script in the Application Script Name field. This script name is
required; it is a very important field in future versions of Ganymede Software products, so be sure to enter
descriptive information.
When editing script parameters, you can change their names and the variables included in the parameter. See
Editing a Parameter of a Script Command for more information.
When editing script variables, you can change their names, their current and default values, and their
comments. See Editing a Script Variable for more information on editing variables.
For a full description of the script commands and their parameters and the rules for governing the creation of
valid scripts, see the Messages and Application Scripts manual.
To save the script, select the Save menu item from the File menu.

Saving a Script
Saving Scripts from the Standalone Script Editor
If you are using the Script Editor as a standalone product and want to save your changes with the same script
file name, select the Save menu item from the File menu. If you have not previously saved this script, the
Save Script File As dialog is shown.
If you want to save your changes under a new file name, select the Save As menu item from the File menu.
The Save Script File As dialog is shown. Enter or select the filename.

Saving Scripts from within Chariot


If you access the Script Editor from within Chariot, you can save your modifications to a script on two different
levels: the pair level and the file level. If you save a script at the pair level, the modifications or the new script
are only available to the pair from which you accessed the Script Editor. If you save a script at the file level,
the modifications or the new script are available to all pairs.
You can only save your changes at the pair level if you access the Script Editor from the Edit a Pair dialog. If
you access the Script Editor from the Tools menu, you can only save your changes at the file level.
Pair Level
The Script Editor lets you modify a script and have those modifications only used by the pair the script was
attached to when it was modified. The modifications are not reflected in the version of the script attached
to other pairs.
You can save your modifications at the pair level in two ways:
1. go to the File menu and select the Save to pair menu item
2. press Ctrl+S
The Script Editor saves the modifications to the pair level. Note that the file name shown in the title bar
does not change. If you are saving a new script, Untitled is shown in the title bar.
File Level
The Script Editor lets you modify a script and have those modifications available to new pairs created after
you modified the script. If you previously associated the script with a pair, your modifications will not be
reflected in the version of the script associated with the pair. To have the modifications reflected in
existing pairs, you must reattach the script to the pair.


If you access the Script Editor from the Edit a Pair dialog and want to save a script at the file level, select
the Save As menu item from the File menu. Enter or select the filename you want to save the script as and
press the OK button. The Script Editor saves the modifications to the script on the file level. The
filename of the script is shown in the title bar.
If you access the Script Editor from the Tools menu and want to save your changes with the same script
file name, select the Save menu item from the File menu.
If you access the Script Editor from the Tools menu and want to save your changes under a new file name,
select the Save As menu item from the File menu. The Save Script File As dialog is shown. Enter or
select the filename.

The Edit Menu (Script Editor)


The menu items on the Script Editor Edit menu let you work with the script shown in the Script Editor.
Undo
You can undo an unlimited number of previous actions in the Script Editor. This menu item is only available
when your last action was not selecting the Undo menu item. You can undo previous actions in two ways:
1. go to the Edit menu and select the Undo menu item
2. press Ctrl+Z

Redo
You can reverse your last undo and return the script to the state before you selected the Undo menu item. This
menu item is only available when your last action was an Undo.
You can redo actions in two ways:
1. go to the Edit menu and select the Redo menu item
2. press Ctrl+E

Delete
You can delete commands and variables from a script. First, highlight the command or variable you want to
delete by clicking on the command or variable. Once selected, you can delete the command or variable in two
ways:
1. go to the Edit menu and select the Delete menu item
2. press the Delete key

The command or variable is deleted from the script and is not shown in the Script Editor.
Move Up
You can move variables up in the sequence of commands in a script. To keep the script valid, the Move Up
menu item and Move Up icon are only available when moving the highlighted variable up is a valid move. In
some cases, using the Move Up function may cause the highlighted variable to move up to the next valid place
in the script or may cause other variable to move.


First, select the variable that you want to move up by clicking on the variable. Once selected, you can move the
variable up in three ways:
1. go to the Edit menu and select the Move up menu item
2. press Ctrl+Up
3. press the Move up icon

Move Down
You can move variable down in the sequence of commands in a script. To keep the script valid, the Move
Down menu item and Move Down icon are only available when moving the highlighted variable down is a
valid move. In some cases, using the Move Down function may cause the highlighted variable to move down
to the next valid place in the script or may cause other variables to move.
First, select the variable that you want to move down by clicking on the variable. Once selected, you can move
commands up in a script in three ways:
1. go to the Edit menu and select the Move down menu item
2. press Ctrl+Down
3. press the Move down icon

Swap Sides
You can move a command to the opposite endpoint or switch a command pair. For example, if you highlight
SEND/RECEIVE and use the swap functionality, the command pair is now RECEIVE/SEND. This
functionality is only available for certain commands.
First, select the command that you want to move to the other endpoint by clicking on the command. Once
selected, you can swap sides in three ways:
1. go to the Edit menu and select the Swap sides menu item
2. press Ctrl+W
3. press the Swap sides icon

Edit Parameter
You can change a command's parameters and assign each parameter as either a constant or a variable. See
Editing a Parameter of a Script Command for more information on editing parameters.
First, select the command that you want to edit by clicking on the command or a parameter for the command.
You can edit parameters in three ways:
1. go to the Edit menu and select the Edit parameter menu item
2. double-click with your mouse
3. press the Enter key

The Edit Parameter dialog is shown.


Edit Variable
You can change a variable's name, description, value, and default value. See Editing a Script Variable for
more information on editing variables.


First, select the variable that you want to edit by clicking on the variable. You can edit the variable in three
ways:
1. go to the Edit menu and select the Edit variable menu item
2. double-click with your mouse
3. press the Enter key

The Edit Variable dialog is shown.

The Insert Menu (Script Editor)


The Insert menu lets you insert the following commands into the script:
Script Command               Description

CONNECT                      Creates a connection. Also inserts a DISCONNECT command.
SEND from Endpoint 1         Sends a buffer of the size and type you specified from Endpoint 1 and
                             receives data at Endpoint 2.
SEND from Endpoint 2         Sends a buffer of the size and type you specified from Endpoint 2 and
                             receives data at Endpoint 1.
FLUSH at Endpoint 1          Directs the Endpoint 1 protocol stack to flush its buffers of unsent
                             messages.
FLUSH at Endpoint 2          Directs the Endpoint 2 protocol stack to flush its buffers of unsent
                             messages.
CONFIRM from Endpoint 1      Requests an acknowledgment from Endpoint 2 that the previously-sent
                             data has been received.
CONFIRM from Endpoint 2      Requests an acknowledgment from Endpoint 1 that the previously-sent
                             data has been received.
SLEEP at Endpoint 1          Simulates a user or processing delay at Endpoint 1.
SLEEP at Endpoint 2          Simulates a user or processing delay at Endpoint 2.
LOOP                         Repeats the commands between LOOP and END_LOOP.

Highlight the location in the script where you want to insert the command or the Group of Commands to insert
around. From the Insert menu, select the command you want to insert in the script.
All scripts must adhere to specific rules. See the Messages and Application Scripts manual for more
information. Only the commands that can be inserted at the selected location in the script are available.


Script Editor Keys Help


You can use the following keys and key combinations as shortcuts in the Script Editor window, instead of
using the mouse.

Del              delete the highlighted variable from the script.
Enter            edit the currently-highlighted parameter on a command, or script variable (depending on
                 whether the focus is in the top or bottom portion).
F1               get help for the Script Editor window.
F2               get an index of all the available help topics.
F3               exit the window.
F9               show the keys and key combinations available in a window.
F10              get information about how to use operating system help.
F11              get the About dialog, which shows your version and build level, and lets you get product
                 support information.
Ctrl+E           redo the last operation to the script, assuming you've just chosen Undo.
Ctrl+N           set up a new script. The New Script dialog is shown. You can add a new script based on
                 the available script templates.
Ctrl+O           open an existing script file.
Ctrl+S           save a script file, using the filespec shown on the titlebar. If the script is still untitled, the
                 Save Script File As dialog lets you choose a path and filename for the script.
Ctrl+W           swap the sides for the currently-highlighted script commands. That is, move the Endpoint
                 1 command to Endpoint 2, and move the Endpoint 2 command to Endpoint 1.
Ctrl+Z           undo the last operation.
Ctrl+Down Arrow  move the currently-highlighted script variable one row lower in the list of variables. You
                 cannot move the port_number variable from the bottom of the list.
Ctrl+Up Arrow    move the currently-highlighted script variable one row higher in the list of variables. You
                 cannot move the port_number variable from the bottom of the list.
Alt+F4           close any window or dialog. When used to close a dialog, it has the same effect as pressing
                 the Esc key or pressing Cancel with the mouse.

In addition to these keys, you can use the Alt key in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.


File Types and How They are Handled


Chariot uses the following naming convention for file extensions, listed alphabetically.

File Extension   File Description

.AUD   An ASCII file found at the endpoints. File ENDPOINT.AUD contains a record for
       each time a test is started and stopped. The records are in comma-delimited format,
       allowing easy input into spreadsheet programs.
       For more information, see the SECURITY_AUDITING and AUDIT_FILENAME
       keywords for the ENDPOINT.INI file described in your Network Performance
       Endpoint manual.

.CSV   An ASCII comma-separated file, generated at the console. This contains the results
       of a run, in a format suitable for loading into a spreadsheet program such as Excel or
       Lotus 1-2-3.
       See Export Options for CSV file on page 65 for information on exporting the CSV
       file format from Chariot.

.DAT   An ASCII file, kept in the directory where the Chariot console is started.
       DEREGISTER.DAT - contains the 3 fields that are saved when Chariot is deregistered:
       registration number, previous license code, and deregistration key
       ENDPOINT.DAT - a list of endpoint network addresses entered at the console
       SERVQUAL.DAT - a list of APPC mode names and RTP/TCP/UDP Quality of Service
       (QoS) templates
       SPXDIR.DAT - a list of IPX addresses and their aliases
       MCG.DAT - a list of IP Multicast groups

.ERR   An ASCII error log file generated internally by any of the Chariot programs. If the
       file ASSERT.ERR is generated, this could indicate a program defect which may
       affect the operation of Chariot. Keep a copy of the file and refer to the Ganymede
       Software Customer Care chapter on page 163 for information on how to report the
       problem.

.GIF   A binary graphics file. If you export in HTML format and choose to export graphs
       of your results, each graph is saved in a separate file. Files in the GIF format are
       suitable for loading into many word processors, graphics applications, and Web
       browsers.

.HTM   An ASCII file with HTML tags. This is the default file extension for exporting
       tests and results for use as Internet Web pages. You can view a formatted page
       with any Web browser that supports tables. Imbedded graphs are saved as separate
       GIF files.

.INI   An ASCII file found at the endpoints. File ENDPOINT.INI determines the
       capabilities of the endpoint, and what consoles can connect to it.

.LCL   A binary file that determines the time and date format, the comma separation
       format, and the type of money symbol to use based on the language of the version of
       Chariot. This file must be in the directory where Chariot is installed for Chariot to
       run.

.LOG   A binary log file generated by the console, RUNTST, CLONETST, or any endpoint.
       See the Working with the Error Log Viewer section on page 96 for more
       information.

.SCR   A binary script file, containing the network calls and their script variables. This file
       is protected from damage with a CRC checksum, so it should not be modified, even
       with a binary editor. Chariot 3.1 can write script files in version 3.1 format or the
       older version 2.2 format.

.TST   A binary test file, containing endpoint pair definitions and their associated scripts,
       and, optionally, the results of one run. This file is protected from damage with a
       CRC checksum, so it should not be modified, even with a hex editor.
       Chariot 3.1 can write test files in version 3.1 or 2.2 formats. Chariot 3.1 test files
       cannot be read by Chariot 2.2 programs.

.TXT   An ASCII listing file. This is the default file extension when exporting tests and
       results to a text file.

.WK3   A binary spreadsheet file, generated at the console. This contains the results of a
       run, in a format suitable for loading into a spreadsheet program such as Excel or
       Lotus 1-2-3. This file format will not be supported in the next release of Chariot.
       You can also use the CSV file format to export tests to a spreadsheet program.

Only one program at a time can write to a Chariot test file, to ensure the integrity of the data in the file.
Chariot protects your files and ensures that while a test file is open, other programs cannot write to the file.
When script files are opened, they are read directly into the test being constructed or modified. A test file
contains a separate script for each endpoint pair, allowing full flexibility in the choice of script variables.
Scripts are stored in a compact, binary format, so script files and test files (without results) are rarely large.
Test and script files are protected from damage by a checksum. This makes modifying them with a binary or
hexadecimal editor impractical.
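If you want to post-process an exported .CSV results file with something other than a spreadsheet, a few lines of scripting are enough to read it. The following is only a minimal sketch: the filename is hypothetical, and the column layout depends on the export options you chose at the console.

    import csv

    # Print every row of a Chariot CSV export; adjust the path to your own file.
    with open("filexfer.csv", newline="") as results:
        for row in csv.reader(results):
            print(row)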


Using the Command-Line Programs


Chariot contains four programs that produce command-line text output. These let you do many of the console
functions (except creating and changing tests), from a command prompt at the computer where you installed
the console. You can combine these programs to execute tests from inside batch files, intermixed, for example,
with other programs. This chapter discusses these command-line programs:

"RUNTSTRunning Tests" on page 111


"FMTTSTFormatting Test Results" on page 112
"CLONETSTReplicating Pairs in a Test" on page 114
"FMTLOGFormatting Binary Error Logs" on page 116

Each of these commands writes information to the screen, using stdout. You can redirect this information to a
file, using the > or >> operators. If you choose to redirect the output to a file, you can print the file or
manipulate it with an ASCII text editor.
See the Chariot Programming Reference for information on the Chariot API which provides you with the
ability to automate testing.

RUNTSTRunning Tests
The program named RUNTST lets you run test files created by the Chariot console program.
Here's the syntax of the RUNTST command:
RUNTST test_filename [new_test_filename] [-tN]

 Enter RUNTST at a command prompt on the computer you're using as the console.
 As its first parameter, supply the filespec of a Chariot test file.
 Its second parameter is optional; you can supply a separate filespec as a target for the test setup and results.
  If the second parameter, new_test_filename, is omitted, the results are written directly to the original test file.
 The -t (timeout) parameter is optional. If specified, this parameter causes RUNTST to stop running the
  test after N seconds.

For example, here's how to run the Chariot test contained in a file named FILEXFER.TST, and write the
results back into that file:
RUNTST TESTS\FILEXFER.TST

The RUNTST program runs until the timing records are returned by all Endpoint 1 computers participating in
the test.
Use the FMTTST command to read the binary results data in a test file and produce a formatted listing.


While it is running, RUNTST shows its progress by writing to stdout, so you can see what is going on. You
can stop a running test by pressing Ctrl+C or Ctrl+Break; RUNTST will ask you if you really want to exit. If
you answer with a Y, RUNTST directs the endpoints to stop the test. If the stopping seems to take excessively
long, press Ctrl+C or Ctrl+Break again to exit the program altogether. (However, calling it from normal batch
files on Windows NT works as you would expect.)
RUNTST does not poll endpoints, even if polling is defined in the test file.
If RUNTST reads a test file from an older version, it writes out its test setup and results in its current version.
For example, if you have a test that was created at version 1.x and you run the newest version of RUNTST, the
file that's written will be in the newest version, which cannot be read by older versions of RUNTST and
FMTTST. If you'd like to continue using older versions of RUNTST, be sure to make an extra copy of the test
file (or write to a new_test_filename) so you still have a copy of your original.
The RUNTST and Chariot console programs can generally be loaded at the same time, but only one of them
can be running a test at a time. However, if you've changed your Reporting Ports on the Firewall Option tab
on the Change User Settings notebook to a value other than AUTO at the console, RUNTST can't run while
the console is loaded; expect to see message CHR0264 at RUNTST.
The RUNTST program is installed at the console.
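Because RUNTST and FMTTST are ordinary command-line programs, they can be chained in a batch file. The following sketch (with hypothetical filenames) runs a test with a 10-minute timeout, preserves the original test file by writing results to a copy, and then formats the results as a Web page; FMTTST is described in the next section.

    REM Run the test; stop it after 600 seconds and keep the original test file intact.
    RUNTST TESTS\FILEXFER.TST TESTS\FILEXFER1.TST -t600 > RUNLOG.TXT

    REM Format the results of the run as an HTML page for a Web server.
    FMTTST -h TESTS\FILEXFER1.TST > C:\WEB\XFER1.HTM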

FMTTSTFormatting Test Results


The command-line program named FMTTST lets you format your test results. It reads its input from a Chariot
test file. It can create output in four different forms: ASCII text, an HTML Web page, a CSV spreadsheet
file, or a WK3 spreadsheet file. When FMTTST writes spreadsheet output, it appends either a file extension of
WK3 or CSV to the name of the Chariot test file you specify.
Here is the syntax of the FMTTST command:
FMTTST tst_filename [output_filename] [-s | -h | -v] [-c | -t template_name] [-q]

The tst_filename parameter is the name of the Chariot test file to be formatted. The output_filename is the file
to which all test output is written. If no output_filename is supplied, output is directed to stdout. The template
name parameter represents a template containing print/export options created at the Chariot console. See
Working with Output Templates in the Operating the Console chapter for more information on working with
output templates.


Here are the FMTTST command-line options:

FMTTST option   FMTTST Option Description

-h   Creates HTML output. This flag controls the format of the output. If you use this flag, you cannot
     use the -s flag.

-v   Creates comma-separated output (with file extension .CSV). You can select which aspects of the
     tests to export by specifying the specific CSV options described below. If you use this flag without
     specifying CSV-specific flags, the entire contents of the test are used to create the output. If you use
     this flag, you cannot use the -c or -t flag to specify the print/export options for the results.

-s   Creates spreadsheet output (with file extension .WK3). This flag controls the format of the output.
     When you use this flag, the entire contents of the test are used to create the output. If you use this
     flag, you cannot use the -c or -t flag to specify the print/export options for the results.

-c   Generates the output according to the export configuration last used in the Chariot console. The -c
     switch exports the test to text format, or, in combination with the -h switch, to HTML, using the
     custom configuration settings that were last selected at the Chariot console. This is useful in limiting
     the output to the exact data you are interested in. This flag controls what print/export options to use
     for the results. If you use this flag, you cannot use the -s flag.

-t   Creates output based on the print/export options saved in an output template. Enter the name of the
     output template after this parameter. This flag controls what print/export options to use for the
     results. If you use this flag, you cannot use the -c flag.

-q   Run in quiet mode. There is no confirmation for file overwrites.

 To generate HTML output, supply the -h flag
 To generate spreadsheet output in the CSV file format, supply the -v flag
 To generate spreadsheet output in the WK3 file format, supply the -s flag
 To generate ASCII text, supply neither -h nor -s
 To generate output based on the print/export options saved in an output template, supply the -t flag and the
  name of the output template you want to use

For example, to see the formatted results for the Chariot file TESTS\FILEXFER.TST on the screen, a screen at
a time, enter:
FMTTST TESTS\FILEXFER.TST | more

You can also use FMTTST to create Web pages containing test results. The output files contain the HTML
tags needed for any Web browser that supports tables. Use the -h flag to generate HTML output. For
example:
FMTTST -h TESTS\FILEXFER.TST >C:\WEB\XFER1.HTM

Graphs are exported and linked to the HTML Web page with the GIF file format. GIF files are written to the
current directory. The following filenames are used for the GIFs.
Graph Type           Filename

throughput graph     <testname>_throughput.gif
transaction graph    <testname>_trans_rate.gif
response graph       <testname>_resp_time.gif

You can use FMTTST to create spreadsheet output to either the CSV file format or the WK3 file format.
These file formats can be read by modern spreadsheet programs, such as Excel or Lotus 1-2-3. Export to the
WK3 file format will not be supported in the next release of Chariot.


When you export to the CSV file format, you can use the CSV flags to specify which aspects of the test that
you want to export. If you do not set one of these flags, all aspects of the test are exported. These flags can
only be set when using the -v flag.
Here are the FMTTST CSV options:

CSV option   CSV Option Description

-r   Provides a summary of any results and your run options.
-s   Provides the information contained in the Test Setup tab of the Test Window.
-d   Provides the timing records for the pairs in your test.

To export all aspects of the test to CSV file format, enter:


FMTTST -v TESTS\FILEXFER.TST TESTS\FILEXFER.CSV

In this example, the spreadsheet output is written to file TESTS\FILEXFER.CSV.


To specify which aspects of the test to export to the CSV file format, enter the flag for the aspect you want to
export after the -v flag. For example, to export the pair summary table, enter the following:
FMTTST -v -s TESTS\FILEXFER.TST TESTS\FILEXFER.CSV

See Export Options for CSV file on page 71 in the Working with the Console chapter for more information
on the CSV file format.
To export the test results to the WK3 file format, enter:
FMTTST -s TESTS\FILEXFER.TST

In this example, the spreadsheet output is written to file TESTS\FILEXFER.WK3.


The FMTTST program is installed at the console.

CLONETSTReplicating Pairs in a Test


The command-line program named CLONETST lets you build complex tests with very little work. See Using
CLONETST on page 138 in the Tips for Testing chapter for ways to use CLONETST to make tests more
flexible.
CLONETST identifies pairs by their sequential position in the test, not by their original pair numbers. For
example, suppose you have 8 pairs, numbered 1-8, and you delete pair 5. If you attempt to clone pair 8,
CLONETST returns an error stating that the pair doesn't exist: CLONETST counts the pairs that actually
remain, so the pair you knew as pair 8 is pair 7 to CLONETST.
You start by building a test at the console, adding a few pairs, and saving that test to a file. The CLONETST
program takes two files as input:
1. The test file you created at the console
2. A text file containing a list of pair numbers and network addresses.

CLONETST reads the template pairs from your test file, and creates a third file. This third file is a Chariot
test file, created by replacing the network addresses in the first file with those it read from the second file.


Thus, the CLONETST program requires three parameters:
 Original test filename (binary Chariot file)
 Clone list (ASCII text file)
 New test filename (binary Chariot file)

For example:
CLONETST input.tst clone.lst output.tst

You specify three items in each line of the second file (named CLONE.LST in this example):
 The pair number to copy from the original test
 The Endpoint 1 and 2 network addresses, used to replicate the original pair
These three items must be specified together on a line within the input file, separated by spaces. Blank lines in
this file are ignored.
The pair number is the pair's sequential position in the list. If you have added or deleted pairs in the test, the
pair number shown at the console may not match this sequential position.
You cannot use CLONETST to build tests with multicast groups. Use the console to build tests with multicast
groups.
Here's an example line in a Clone List file:

1   NewFromName   NewToName

Here's a more advanced example, showing how you might create a test with three endpoint pairs, but use
CLONETST to produce a test file with four endpoint pairs.
If the original test file contained three pairs, such as the following:

1   GANYMEDE.100   GANYMEDE.90    APPC   CREDITL.SCR    etc...
2   44.44.44.22    44.44.44.65    TCP    FILERCVL.SCR   etc...
3   33.44.55.66    33.44.55.77    UDP    FTPGET.SCR     etc...

and the CLONE.LST file contained:

1   MYNET.A       MYNET.B
1   MYNET.C       MYNET.D
2   11.11.11.11   22.22.22.22
1   MYNET.E       MYNET.F

The resulting OUTPUT.TST file would look like:

1   MYNET.A       MYNET.B       APPC   CREDITL.SCR    etc...
2   MYNET.C       MYNET.D       APPC   CREDITL.SCR    etc...
3   11.11.11.11   22.22.22.22   TCP    FILERCVL.SCR   etc...
4   MYNET.E       MYNET.F       APPC   CREDITL.SCR    etc...

You can see that we've used CLONETST to prune the UDP pair from the OUTPUT.TST file above. A clever
way to use CLONETST is to build an original file containing each of the different combinations of protocols
and scripts you plan to use. The CLONE.LST file lets you then create tests on-the-fly, assigning addresses to
pairs. You can omit the pairs you don't need for a given test.
The CLONETST program is installed at the console.
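If you generate the clone list programmatically (for example, from an inventory of endpoint addresses), any scripting language will do, since the file is plain ASCII with three space-separated items per line. Here is a minimal, hypothetical sketch; the addresses and filename are only examples.

    # Write a CLONE.LST file: template pair number, Endpoint 1 address, Endpoint 2 address.
    pairs = [
        (1, "MYNET.A", "MYNET.B"),
        (1, "MYNET.C", "MYNET.D"),
        (2, "11.11.11.11", "22.22.22.22"),
    ]

    with open("CLONE.LST", "w") as clone_list:
        for template_pair, endpoint1, endpoint2 in pairs:
            clone_list.write(f"{template_pair} {endpoint1} {endpoint2}\n")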


FMTLOGFormatting Binary Error Logs


Whenever one of the console programs encounters a problem, it logs the problem information to an error log
file at the console. Similarly, whenever one of the endpoint programs encounters a problem it can't report to
the console, it logs that problem to an error log file at the endpoint.
CHARIOT logs problems to file CHARIOT.LOG; RUNTST logs to RUNTST.LOG; CLONETST logs to
CLONETST.LOG; and endpoints log to ENDPOINT.LOG. The RUNTST and CLONETST error logs at the console
are always written to the directory where Chariot is installed. To view an error log, you can use the Error Log
Viewer or the command-line program named FMTLOG. You can also use the Error Log Viewer to view
RUNTST.LOG or CLONETST.LOG. See the Working with the Error Log Viewer section on page 96 for more
information.
Program FMTLOG reads from a binary log file, and writes its formatted output to stdout. Here is the syntax of
the FMTLOG command:
FMTLOG log_filename >output_file
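For example, to format the console error log and save the formatted listing in a text file (the output filename here is only an illustration):
FMTLOG CHARIOT.LOG > CHARLOG.TXT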

The FMTLOG program is installed at the console and at the endpoints. See the Performance Endpoint manual
for additional information on running FMTLOG on the platform you are using.


Viewing the Results


This chapter discusses the results generated by running a test. It provides details on reading your results, as
well as background technical information about how Chariot generates its timing measurements.
See Print and Export Options in the Operating the Console chapter for information on printing and
exporting data.

Reading Your Test Results


Printed or exported results consist of three major sections:

The Summary, Run Options, and Test Setup Section on page 117 providing an overview of how the test
was set up, and when it was run.

The Test Totals Section on page 118 showing totals, average, minimum and maximum for throughput,
transaction rate, response time, and streaming results. The datagrams, endpoint configuration and raw
data totals are also shown.

A set of Confidence Intervals on page 128 showing detailed information about each endpoint pair and
showing all the timing records for that pair.

Summary, Run Options, and Test Setup Section


This section is broken down into four subsections: a test summary, the run options, and the test setup for the
connection between the endpoints and between the console and Endpoint 1 for each pair. In each of the
subsections, n/a is shown when a field is not present or does not apply.
Here is an example of this section, for a test with four endpoint pairs:
SUMMARY - H:\TESTS\EXPORT.TST
Chariot console version
3.1
Chariot console build level xxx
Chariot console product type Chariot
Filename
H:\tests\export.tst
Run start time
Thursday, October 01, 1998, 10:44:33 AM
Run end time
Thursday, October 01, 1998, 10:45:33 AM
Elapsed time
00:01:01
Number of pairs
4

This subsection shows information about when (if at all) the test was run, and how long that run took.
Run start time and Run end time mark the date and times when the test was started at the console and when
the test was ended; this doesn't count the time spent formatting results. The Elapsed time is the difference, in
seconds, between the start and end times.
In the throughput, transaction rate, and response time results for each pair, Chariot shows the Measured time.
The Measured Time is the sum of the times in all the timing records returned for that endpoint pair. Thus, the
Measured Time for any endpoint pair is always less than the total elapsed time for a test.


RUN OPTIONS
End type                                            Run until any script completes
Duration                                            00:01:15
Reporting type                                      Real-time
Automatically poll endpoints                        Yes
Polling interval (minutes)                          1
Stop run upon initialization failure                Yes
Connect timeout during test (minutes)               0
Collect endpoint CPU utilization                    No
Validate data upon receipt                          No
Use a new seed for random variables on every run    Yes
Datagram window size (bytes)                        1500
Datagram retransmission timeout (milliseconds)      200
Datagram number of retransmits before aborting      50
Receive Timeout (milliseconds)                      10000
Time to Live (Hops)                                 1

This subsection shows the Run Options used for this test. If datagram protocols were used for any of the pairs,
the datagram options are also shown. See Changing the Default Run Options and Changing Your
Datagram Parameters in the Operating the Console chapter for detailed information on each of these values.
TEST SETUP (ENDPOINT 1 TO ENDPOINT 2)
Group/                                                         Network   Service
Pair    Endpoint 1              Endpoint 2              Protocol  Quality   Script name
----    ---------------------   ---------------------   --------  -------   ------------
SPX
1       00000002:00a024cc3d29   00000002:00a024cc3f55   SPX       n/a       filesndl.scr
2       00000002:00a024cc3d29   00000002:00a024cc3f55   SPX       n/a       filesndl.scr
TCP
3       test13                  44.44.44.119            TCP       n/a       filesndl.scr
4       test13                  44.44.44.119            TCP       n/a       filesndl.scr

This section shows the test setup for each pair. The test setup includes the group, the endpoint pair number,
the Endpoint 1 and 2 network addresses, the network protocol, the service quality (if any), and the script used
by each endpoint pair. See Adding or Editing an Endpoint Pair in the Operating the Console chapter for
details on each of these fields.
TEST SETUP (CONSOLE TO ENDPOINT 1)
Group/    Console Knows           Console    Console Service   Pair
Pair      Endpoint 1              Protocol   Quality           Comment
------    ---------------------   --------   ---------------   -------
SPX
Pair 1    00000002:00a024cc3d29   SPX        n/a
Pair 2    00000002:00a024cc3d29   SPX        n/a
TCP
Pair 3    44.44.44.56             TCP        n/a
Pair 4    44.44.44.56             TCP        n/a

This section shows how the console connected to Endpoint 1, for each pair. The portion of test setup includes
the group, the endpoint pair number, the network address by which the console knows Endpoint 1, the network
protocol, the service quality (if any), and descriptive comment for this pair (if one was entered). See Adding
or Editing an Endpoint Pair in the Operating the Console chapter for details on each of these fields.

Test Totals Section


The next sections of the exported or printed output correspond with the tabbed sections of the Test window that
appear when you have results. The next three sections show the summary of results for the Throughput,
Transaction Rate, and Response Time. Following that is a summary of the Endpoint Configuration. The next
section is for the Raw Data Totals. The last section is optional in the exported or printed results; it is only
shown if you used a datagram protocol in your test.


If you are exporting pairs that are part of a multicast group, it may appear that the totals do not include an
aggregate of all the pairs. That is because for multicast, some of the data should only be counted once. For
example, if 5 pairs of a multicast group have a throughput of 5 Mbits/second, the actual effect on the network
is not 25 Mbits/sec, instead it is only 5 Mbits/second. This is because although the data is sent only once, 5
different endpoints received the data and reported the throughput.
Multicast pairs may be counted more than once in group totals if they appear in different groups based on the
grouping you have chosen, for example, Group by Endpoint 2. In these cases the multicast information may be
counted once for each of the group totals but will only be counted one time for the test total.
In the following sections, you are informed when multicast pairs are not counted individually in the totals.

Throughput, Transaction Rate, and Response Time


The three subsections of the exported or printed output show the summary of results for the Throughput,
Transaction Rate, and Response Time. At the console, these are separated into three separate tabbed areas.
Each of these three subsections is structured the same: the first column shows the group and pair numbers.
The next three columns show the Average, Minimum, and Maximum timing record calculations for each pair.
For the group and totals rows, the aggregate values are shown. For example, in the Average column, the group
row shows the aggregate of the averages of the values for the group; the totals row shows the aggregate of the
averages for all the pairs.
If the pair is a member of a multicast group, the throughput total includes only one pair's throughput value
(whichever pair had the highest).
If the pair's script is a streaming script, the transaction rate and response time are not applicable. Streaming
scripts do not report transaction rate or response time because the data flow is one-way.
The next column shows the confidence interval for each pair. The last two columns are the same for each
subsection: the measured time and the relative precision. All of the values and their calculations are discussed
below in detail.
Here's an example of the Test Totals section:
THROUGHPUT
           Throughput    Throughput    Throughput    Throughput 95%
Group/     Average       Minimum       Maximum       Confidence       Measured   Relative
Pair       (Mbits/sec)   (Mbits/sec)   (Mbits/sec)   Interval         Time       Precision
-------    -----------   -----------   -----------   --------------   --------   ---------
SPX        3,419.343     134.886       4,245.966
Pair 1     1,667.644     146.413       3,906.289     730.602          2.928      43.810
Pair 2     1,751.669     134.886       4,245.966     774.590          3.122      44.219
TCP        6,209.364     887.793       7,512.094
Pair 3     3,076.313     887.793       5,425.401     292.901          3.111      9.521
Pair 4     3,133.052     948.128       7,512.094     307.097          3.117      9.802
Totals:    9,628.707     134.886       7,512.094

TRANSACTION RATE
           Transaction   Transaction   Transaction   Transaction Rate
Group/     Rate          Rate          Rate          95% Confidence     Measured   Relative
Pair       Average       Minimum       Maximum       Interval           Time       Precision
-------    -----------   -----------   -----------   ----------------   --------   ---------
SPX        35.014        1.381         43.478
Pair 1     17.077        1.499         40.000        7.481              2.928      43.810
Pair 2     17.937        1.381         43.478        7.932              3.122      44.219
TCP        63.583        9.091         76.923
Pair 3     31.501        9.091         55.556        2.999              3.111      9.521
Pair 4     32.082        9.709         76.923        3.145              3.117      9.802
Totals:    98.597        1.381         76.923


RESPONSE TIME
           Response    Response    Response    Response Time
Group/     Time        Time        Time        95% Confidence   Measured   Relative
Pair       Average     Minimum     Maximum     Interval         Time       Precision
-------    --------    --------    --------    --------------   --------   ---------
SPX        0.05716     0.02300     0.72400
Pair 1     0.05856     0.02500     0.66700     0.026            2.928      43.810
Pair 2     0.05575     0.02300     0.72400     0.025            3.122      44.219
TCP        0.03146     0.01300     0.11000
Pair 3     0.03174     0.01800     0.11000     0.003            3.111      9.521
Pair 4     0.03117     0.01300     0.10300     0.003            3.117      9.802
Totals:    0.04431     0.01300     0.72400

Here is a description of how Chariot calculates throughput, transaction rate, and response time.
Throughput
The throughput is calculated with the following equation:
((Bytes_Sent + Bytes_Received_By_Endpoint_1) / Throughput_Units) / Measured_Time

If you are running a streaming script:
(Bytes_Received_By_Endpoint_2 / Throughput_Units) / Measured_Time

Here's how each of these variables is defined:
 Bytes_Sent - the number of bytes sent by Endpoint 1 of a pair.
 Bytes_Received - the number of bytes received by the endpoint of a pair (Endpoint 1 or Endpoint 2, as
  named in the equation).
 Throughput_Units - the current throughput units value, in bytes per second. For example, if the
  throughput units is KBps, the Throughput_Units number is 1024. In this example, the throughput units is
  shown in the column heading as Mbits/sec, which is 125,000 bytes per second (that is, 1,000,000 bits
  divided by 8 bits per byte). See Changing Your Throughput Units in the Operating the Console chapter.
 Measured_Time - the sum, in seconds, of all the timing record durations returned for the endpoint pair.

Transaction Rate
The calculations are shown in transactions per second. This is calculated as:
Transaction_Count / Measured_Time

Response Time
The response time is the inverse of the transaction rate. The calculations are shown in seconds per transaction.
This is calculated as:
Measured_Time / Transaction_Count
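As an illustration with made-up numbers (not taken from an actual run): suppose a pair's timing records total a Measured_Time of 8 seconds, during which Endpoint 1 sent 9,999,900 bytes, received 100 bytes, and completed 100 transactions. With throughput units of Mbits/sec (125,000 bytes per second), the throughput is (9,999,900 + 100) / 125,000 / 8 = 10 Mbits/sec, the transaction rate is 100 / 8 = 12.5 transactions per second, and the response time is 8 / 100 = 0.08 seconds per transaction.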

Here is a description of each of the columns in these three subsections:


Group/Pair
The group or the pair number
Average
The average for the result type.
Minimum
The minimum value for an individual timing record.


Maximum
The maximum value for an individual timing record.
95% Confidence Interval
Chariot has calculated that with 95% certainty, the real average is within the interval centered around the
Average value. For example,
THROUGHPUT
          Throughput    Throughput    Throughput    Throughput 95%
Group/    Average       Minimum       Maximum       Confidence       Measured   Relative
Pair      (Mbits/sec)   (Mbits/sec)   (Mbits/sec)   Interval         Time       Precision
-----     -----------   -----------   -----------   --------------   --------   ---------
1         3.030         0.608         7.929         0.116            59.737     3.839

You can have 95% confidence that the real average throughput (if you were to run this transaction forever)
would be in the range 3.030 plus or minus 0.116 Mbits/sec. See Confidence Intervals on page 128 for
information about the calculation of confidence intervals.
It takes at least two timing records to calculate a Confidence Interval. If there is zero or only one timing record
for a pair, the Confidence Interval cannot be calculated, and n/a is shown instead.
Measured Time
The total of the measured times for all timing records produced by this pair. The value is shown in seconds.
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. See Relative Precision on
page 129 for more information on the reliability of test results.
The Totals row summarizes the results for each column:
Average
The sum of all the pair averages (except for Response Time, which contains the average of the pair averages).
See Understanding Timing on page 129 for a discussion of how your aggregate throughput value can exceed
the capacity of the network on which you're running.
Minimum
The minimum of all the pair minimums.
Maximum
The maximum of all the pair maximums.

Jitter Data
When running a streaming script with the RTP protocol, the endpoints calculate the amount of jitter for each
timing record. Jitter is the statistical variance of the packet interarrival time. The jitter is measured for each
packet that is sent in a timing record. If only one packet is sent in a timing record, the jitter is zero. The jitter
is reset to zero at the beginning of each timing record. For more information on jitter, see Understanding
Jitter Measurements in the Working with Datagrams and Multimedia Support chapter on page 41.


Here is an example of jitter data:


            Jitter           Jitter           Jitter
Group/      Average          Minimum          Maximum
Pair        (milliseconds)   (milliseconds)   (milliseconds)
---------   --------------   --------------   --------------
All Pairs   7.437            0.499            47.058
Pair 1      7.437            0.499            47.058
Totals:     7.437            0.499            47.058

In this example, pair 1 is part of an IP multicast group; pairs 3 and 4 are single unicast pairs running a streaming
script.
Group/Pair
Shows the group name and each pair within the group.
Average
Average jitter statistic of all timing records in the test.
Minimum
The lowest jitter statistic for an individual timing record in the test.
Maximum
The highest jitter statistic for an individual timing record in the test.
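For readers who want to reproduce the idea behind these jitter figures, the sketch below computes the variance of packet interarrival times from a list of arrival timestamps in milliseconds. It is a simplified illustration, not Chariot's internal calculation; see the chapter referenced above for the authoritative description.

    # Simplified sketch: jitter as the variance of packet interarrival times (milliseconds).
    def interarrival_variance(arrival_times_ms):
        gaps = [later - earlier for earlier, later in zip(arrival_times_ms, arrival_times_ms[1:])]
        if len(gaps) < 2:
            return 0.0   # a single packet (or a single gap) in a timing record gives zero jitter
        mean_gap = sum(gaps) / len(gaps)
        return sum((gap - mean_gap) ** 2 for gap in gaps) / len(gaps)

    print(interarrival_variance([0.0, 20.0, 41.5, 60.0, 85.0]))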

Lost Data
When running a streaming script, the endpoints keep track of lost data. Lost data is data that was discarded
and not received by the receiving endpoint. When Endpoint 1 has completed sending the data, it tells Endpoint
2 how much data was sent so the lost data totals can be calculated. It may be the case that the sender is so
much faster than the receiver that most or even all of the data is lost.
Here is an example of streaming data:
STREAMING DATA
           Bytes       Bytes       Bytes      %        E1             Measured
Group/     Sent by     Received    Lost       Lost     Throughput     Time       Relative
Pair       E1          E2          E2         E1->E2   (KBytes/sec)   (secs)     Precision
--------   ---------   ---------   --------   ------   ------------   --------   ---------
My Group   1,404,000   3,579,849   632,151    15.008
Pair 1     1,404,000   1,131,273   272,727    19.425   1,611.156      0.851      21.689
Pair 2     1,404,000   1,395,225   8,775      0.625    1,609.265      0.852      7.004
Pair 3     1,404,000   1,053,351   350,649    24.975   1,387.747      0.988      2.802
No Group   1,000,000   865,272     134,728    13.473
Pair 4     500,000     431,584     68,416     13.683   541.933        0.901      22.957
Pair 5     500,000     433,688     66,312     13.262   554.865        0.880      20.593
Totals:    2,404,000   4,445,121   766,879    14.714

In this example, pairs 1-3 are part of an IP multicast group; pairs 4 and 5 are single unicast pairs running a
streaming script.
Group/Pair
Shows the group name and each pair within the group.
Bytes Sent by E1
The count of the bytes of data sent by Endpoint 1 in this pair. For the multicast group it is typical to see
the same value for all the pairs since the Endpoint 1 is sending the data once for all of the Endpoint 2s.
The value may be different if Endpoint 2 fails during the test or loses lots of data. Note that for the
multicast group, the total is not the aggregate total (since the data was sent only once).


Bytes Received E2
The count of the bytes of data received by Endpoint 2 in this pair. This is how much of the data was
successfully received by Endpoint 2.
Bytes Lost E2
The count of the bytes of data lost by Endpoint 2 in this pair. This value is Bytes sent by E1 - Bytes
received E2.
% Lost E1->E2
The percentage of the bytes of data lost by Endpoint 2 in this pair. This value is Bytes Lost E2 / Bytes
Sent by E1. Note that the aggregate value for this percentage uses an aggregate value of Bytes Sent by E1.
E1 Throughput
This is the throughput as viewed from Endpoint 1. This value may be greater than the value on the
throughput tab since it does not account for lost data. This value is calculated as follows:
(Bytes Sent by E1 / Throughput units) / Measured time

Measured Time
A total of the amount of time recorded by all of the timing records for this endpoint pair. This may differ
greatly from the amount of time the script was actually executing (that is, the elapsed time, which is shown
at the top of the results), depending on how much activity or SLEEPs the script performed outside of its
START_TIMER and END_TIMER commands. This value is shown in seconds.
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. See Relative Precision
on page 129 for more information on the calculation of relative precision, and Understanding Timing on
page 129 for more information on the reliability of test results.
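To tie these columns together, here is the arithmetic for Pair 1 in the example table above: Endpoint 1 sent 1,404,000 bytes and Endpoint 2 received 1,131,273 bytes, so Bytes Lost E2 is 1,404,000 - 1,131,273 = 272,727 and % Lost E1->E2 is 272,727 / 1,404,000, or about 19.4%. With throughput units of KBytes/sec (1,024 bytes per second) and a measured time of 0.851 seconds, E1 Throughput is (1,404,000 / 1,024) / 0.851, or about 1,611 KBytes/sec.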

Endpoint Configuration
Following these summaries of the numerical results is a description of how the endpoints were configured
when the test was run. At the Chariot console, this Endpoint Configuration is shown in a separately-tabbed
area of the Test window. For example,
ENDPOINT CONFIGURATION
           E1           E1        E1      E1        E2           E2        E2      E2
Group/     Operating    Chariot   Build   Product   Operating    Chariot   Build   Product
Pair       System       Version   Level   Type      System       Version   Level   Type
-------    ----------   -------   -----   -------   ----------   -------   -----   -------
SPX
Pair 1     Windows NT   2.2       xxx     Retail    Windows NT   2.2       xxx     Retail
Pair 2     Windows NT   2.2       xxx     Retail    Windows NT   2.2       xxx     Retail
TCP
Pair 3     Windows NT   2.2       xxx     Retail    Windows NT   2.2       xxx     Retail
Pair 4     Windows NT   2.2       xxx     Retail    Windows NT   2.2       xxx     Retail

The first column is again the group name or pair number. The next four columns describe Endpoint 1 for each
pair; the last four columns describe Endpoint 2.
Operating System
The name of the operating system on which the endpoint is running.
Chariot Version
The Chariot version number for the endpoint. Newer versions have additional capabilities not available on
older versions.
Build Level
An exact identification of the internal build number used by Ganymede Software. This is helpful when
contacting us for service and support.


Product Type
Chariot is available in several forms, which interact in different ways.

Raw Data Totals


Here is an example:
RAW DATA TOTALS
           Number                                   Bytes
           of Timing   Transaction   Bytes Sent     Received   E1 CPU    E2 CPU    Measured      Relative
Group/     Records     Count         by E1          by E1      Utiliz.   Utiliz.   Time (secs)   Precision
-------    ---------   -----------   -----------    --------   -------   -------   -----------   ---------
SPX        219         219           21,900,000     219
1          111         111           11,100,000     111        39        55        9.845         2.172
2          108         108           10,800,000     108        55        39        9.841         3.744
TCP        224         224           22,400,000     224
3          112         112           11,200,000     112        55        39        9.830         4.232
4          112         112           11,200,000     112        39        55        9.882         2.722
Totals:    443         443           44,300,000     443

Let's go through each of the columns:


Group/Pair
Shows the endpoint pair number of this row. This number corresponds to the pair number indicated in the
previous subsections.
Number of Records
A count of the number of timing records generated during the test by this endpoint pair.
Transaction Count
The number of transactions performed by this endpoint pair. Sometimes the transaction count equals the
number of records. If these two fields differ, it is because the script uses an
INCREMENT_TRANSACTION command within a LOOP to emulate multiple transactions per timing
record.
Bytes Sent by E1
A count of the bytes of data sent by Endpoint 1 in this pair. If the pair is a member of a multicast group,
the bytes sent by Endpoint 1 total includes only one pair's sent value.
Bytes Received by E1
A count of the bytes of data received by Endpoint 1 in this pair.
E1 CPU Utilization
Percentage of the CPU utilization for the duration of the test for Endpoint 1. This column is only shown if
the Collect Endpoint CPU Utilization checkbox on the Run Options dialog is selected.
E2 CPU Utilization
Percentage of the CPU utilization for the duration of the test for Endpoint 2. This column is only shown if
the Collect Endpoint CPU Utilization checkbox on the Run Options dialog is selected.
Measured Time
A total of the amount of time recorded by all of the timing records for this endpoint pair. This may differ
greatly from the amount of time the script was actually executing (that is, the elapsed time, which is shown
at the top of the results), depending on how much activity or SLEEPs the script performed outside of its
START_TIMER and END_TIMER commands. This value is shown in seconds.
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. This is the same
number shown in the summaries for the throughput, transaction rate, and response time. See Relative
Precision on page 129 for more information on the calculation of relative precision.
Endpoint Pair Details

Detailed information is shown for each endpoint pair including its configuration, its script, its script variables,
and the individual timing records.
- Endpoint Pair Configuration
- Endpoint Pair Script
- Endpoint Pair Variables
- Endpoint Configuration Details
- Endpoint Pair Timing Records

Endpoint Pair Configuration

Here is an example of the endpoint pair configuration section:
GROUP: SPX / PAIR: 1
Endpoint 1                   00000002:006008Bf5ff8
Endpoint 2                   00000002:006008168423
Network protocol             SPX
Service quality              n/a
Script name                  filesndl.scr
Pair Comment
Console Knows Endpoint 1     00000002:006008bf5ff8
Console Protocol             SPX
Console Service Quality      n/a

The details for each configured endpoint pair are shown in their own section. The first part of each Endpoint
Pair section shows a number of configuration parameters and information about what happened during a run.
These are:
Endpoint 1
The Endpoint 1 network address used by this endpoint pair.
Endpoint 2
The Endpoint 2 network address used by this endpoint pair.
Network Protocol
The protocol used between the two endpoints.
Service Quality
The service quality used between the two endpoints; n/a if not used by this protocol.
Script Name
The filename of the script used between this pair of endpoints.
Pair Comment
The endpoint pair comment (an optional field).
Console Knows Endpoint 1
The configured name used by the console to contact the first endpoint. Depending upon the network
configuration and computer setup, this may be different from the Endpoint 1 field.
Console Protocol
The network protocol used between the console and the first endpoint.
Console Service Quality
The service quality used between the console and the first endpoint; n/a if not used by this protocol.

Endpoint Pair Script

This section of the results shows the script that was used. Here is an example listing:
Script
filercvl.scr, version 2.x -- File Receive, Long Connection

Endpoint 1
-----------------------------------------
SLEEP
initial_delay=0
CONNECT_INITIATE
port_number=AUTO
LOOP
number_of_timing_records=100
START_TIMER
LOOP
transactions_per_record=1
SEND
file_size=100000
send_buffer_size=DEFAULT
send_datatype=NOCOMPRESS
send_data_rate=UNLIMITED
CONFIRM_REQUEST
INCREMENT_TRANSACTION
END_LOOP
END_TIMER
SLEEP
transaction_delay=0
END_LOOP
DISCONNECT

Endpoint 2
---------------------------------

CONNECT_ACCEPT
port_number=AUTO
LOOP
number_of_timing_records=100
LOOP
transactions_per_record=1
RECEIVE
file_size=100000
receive_buffer_size=DEFAULT
CONFIRM_ACKNOWLEDGE
END_LOOP

END_LOOP
DISCONNECT

Endpoint Pair Variables

This section of the results shows a summary of the variable settings used by this script. Here is an example
listing:

Variable Name              Value        Description
------------------------   ----------   ------------------------------------
initial_delay              0            Pause before the first transaction
number_of_timing_records   100          How many timing records to generate
transactions_per_record    1            Transactions per timing record
file_size                  100000       How many bytes in the transferred file
send_buffer_size           DEFAULT      How many bytes of data in each SEND
receive_buffer_size        DEFAULT      How many bytes of data in each RECEIVE
transaction_delay          0            Milliseconds to pause
send_datatype              NOCOMPRESS   What type of data to send
send_data_rate             UNLIMITED    How fast to send data
port_number                AUTO         What port to use between endpoints
Send and receive buffers can be set to the value DEFAULT. This tells an endpoint to use buffers that are the
default size for the network protocol being used. DEFAULT lets you use the default buffer size for each
protocol, without having to modify the script to handle protocol differences. The default value is different
depending on the protocol and platform being used. Chariot uses the most common value for each particular
environment.
Endpoint Configuration Details

This section shows extensive details about the endpoint programs, and the operating systems and protocol
stacks they are using. This information will differ among operating systems and endpoint versions. Here is an
example listing (just the Endpoint 1 portion is shown here), for a Windows NT endpoint running all five
protocols.
Endpoint 1 Type               Endpoint 1 Value
-------------------------     --------------------
E1 Chariot Version            2.2
E1 Build Level                xxx
E1 Product Type               Retail
E1 Operating System           Windows NT
E1 CPU Utilization            Supported
OS Version (major)            4
OS Version (minor)            0
OS Build Number               1381
CSD Version                   Service Pack 3
Memory                        1304848 (KB)
APPC Default Send Size        32763
IPX Default Send Size         1391
SPX Default Send Size         32767
TCP Default Send Size         32767
UDP Default Send Size         8183
RTP Default Send Size         8180
APPC Protocol Stack Name      IBM PCOMM or Communications Server
APPC Protocol API Version     1.0
WinSock API                   Microsoft
WinSock Stack Version         2.2
WinSock API Version Used      2.2

The endpoints return their default Send sizes to the console.
Endpoint Pair Timing Records

The next section shows all of the timing records generated by this endpoint pair during the run. These are the
raw results used to create the statistics for this endpoint pair that are shown in the test totals section.
Here is an example listing:
Timing Records
Transaction Count: 1, Bytes Sent by E1: 100, Bytes Received by E1: 100,000

Record   Elapsed      Measured     Inactive     Throughput    Trans. Rate   Response
Number   Time (sec)   Time (sec)   Time (sec)   (Mbits/sec)   (#/sec)       Time (sec)
------   ----------   ----------   ----------   -----------   -----------   ----------
1             0.062        0.058                  1,683.745        17.241      0.05800
2             0.114        0.052                  1,878.024        19.231      0.05200
3             0.162        0.047                  2,077.813        21.277      0.04700

Technical Details
This section discusses how Chariot calculates confidence intervals, determines the relative precision, and
generates timing records.

Confidence Intervals
Here is a formal definition of a confidence interval using statistical terms.
A confidence interval is an estimated range of values with a given high probability of covering the true
population value.
This is a quantification of the fact that Chariot is doing a sampling of the real (infinite) set of measurements.
If it could sample all of the possible measurements of a network (with infinite time and resources), it would be
100% sure that the calculated average is the correct value. Since Chariot always generates a smaller-than-infinite set of measurements, there is going to be some doubt about whether it has really calculated an average
close to the real average.
To state the definition another way, there is a 95% chance that the actual average lies between the lower and
upper bound indicated by the 95% confidence interval.
Here is how the confidence interval is calculated:
1. Chariot first calculates the standard deviation of the measured time of the timing records.
2. It then calculates the standard error, which is the standard deviation divided by the square root of the
number of timing records minus one.
3. It then uses a statistical table to look up a "t" value, using the number of timing records minus one.
4. The confidence delta is the "t" value times the standard error.
5. This is a confidence interval for the average measured time, which is used to display confidence intervals
for each of the calculation types (throughput, transaction rate, and response time).
The effect of the value "t" is such that the larger the sample size, the smaller the confidence interval, all things
being equal. Thus, one way to shrink the confidence interval is to have the pair generate more timing records.
You may sometimes see a negative number in the lower bound of a 95% confidence interval. The statistical
calculations being used assume an unbounded normal distribution, which could contain negative samples. We
know you can't get negative numbers in real life (communication never goes faster than the speed of light).
Thus, when you get a negative number on the left-hand side of your confidence interval, you should have very
low confidence in the results of the test.
See How Long Should a Performance Test Run on page 142 in the Tips for Testing chapter for further
discussion of how to set up tests to obtain reliable results.
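As an illustration of the steps above, here is a minimal sketch in Python, using SciPy's t distribution for the table lookup in step 3. The sample data reuses the three measured times from the timing record example earlier in this chapter, the variable names are ours, and the exact standard-deviation convention Chariot uses internally is not documented here, so treat this as a sketch rather than Chariot's own code.

# Sketch of the 95% confidence interval calculation described above.
# The measured times are illustrative, one value per timing record.
import math
from scipy import stats

measured_times = [0.058, 0.052, 0.047]
n = len(measured_times)
mean = sum(measured_times) / n

# Step 1: standard deviation of the measured times
std_dev = math.sqrt(sum((x - mean) ** 2 for x in measured_times) / n)

# Step 2: standard error = standard deviation / sqrt(number of records - 1)
std_err = std_dev / math.sqrt(n - 1)

# Steps 3 and 4: two-sided 95% "t" value for n - 1 degrees of freedom,
# multiplied by the standard error to give the confidence delta
t_value = stats.t.ppf(0.975, n - 1)
confidence_delta = t_value * std_err

print("Average measured time: %.5f +/- %.5f seconds (95%%)" % (mean, confidence_delta))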
Relative Precision
The confidence interval is a well-known statistical measurement for the reliability of the calculated average.
Unfortunately, it does not provide a good mechanism to compare tests using different scripts that do different
things; you can't easily compare the reliability of two tests by looking at the confidence interval of a file
transfer script and the confidence interval of an inquiry script.
The Relative Precision is a gauge of how reliable the results are for this particular endpoint pair. Regardless of
what type of script was run, you can compare their relative precision values.
The relative precision is obtained by calculating the 95% confidence interval of the Measured Time for each
timing record, and dividing it by the average Measured Time. This number is then converted to a percentage
by multiplying it by 100. Thus, the lower the Relative Precision value, the more reliable the result. A good
Relative Precision value is 10.00 or less. On an empty LAN, you can get Relative Precision values of less than
1.00 on many tests. See How Long Should a Performance Test Run on page 142 in the Tips for Testing
chapter for further discussion of how to set up tests to obtain reliable results. See Confidence Intervals on
page 128 for information about the calculation of confidence intervals.
Chariot maintains its internal numerical values with more significant digits than those shown in the results. If
you were to calculate values like Relative Precision from the other numbers shown in the results, your
calculation may differ slightly from the numbers displayed by Chariot.
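Continuing the illustrative sketch shown under Confidence Intervals (and using its variable names, which are ours), the relative precision for that pair would simply be:

# Relative precision, continuing the confidence-interval sketch above;
# lower is better, and 10.00 or less is considered a good value.
relative_precision = 100.0 * confidence_delta / mean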
Understanding Timing
Three timing values show up throughout the results. Here are details of elapsed time, measured time, and
inactive time.
Elapsed Time
When referring to an entire test, the Elapsed Time is the duration from when the pairs started running
until they stopped. This value appears at the top of the printed and exported results, as well as in the
status bar at the bottom of a Test window.
When referring to a single timing record, the Elapsed Time is the time at the endpoint when the timing
record was cut. Time 0 is again when all the pairs completed initialization and the test's Run Status went to
Running. This value appears in the Timing Records Details.
Measured Time
When looking at the results for a pair, the Measured Time is the sum of the times in all the timing records
returned for that endpoint pair. This may be less than the amount of time the script was actually executing
(that is, the elapsed time), depending on how much activity or SLEEPs the script performed outside of its
START_TIMER and END_TIMER commands. This value appears in the Throughput, Transaction Rate,
and Response Time results.
When referring to a single timing record, the Measured Time is the time measured between the
START_TIMER and END_TIMER commands in a script. This value appears in the Timing Records
Details.
Inactive Time
The Inactive Time is the time spent outside the START_TIMER and END_TIMER commands in a script.
This is time when an endpoint isn't doing work that's being measured. If this inactive time is more than
25 ms between timing records, it is shown; otherwise, inactive time is shown as blank for times below this
threshold of 25 ms. The inactive time for each timing record is shown in the Timing Records Details. See
Graphs and Timings on page 130 for more information.
The clock timers used when timing scripts are generally accurate to 1 millisecond, although this accuracy
depends on the endpoint operating system. For most tests, this is more than sufficient. If the transactions in a
test are too short, this timing accuracy can cause problems.
If the measured time of timing records drops, the resolution of 1 millisecond becomes a higher percentage of
the actual measured time. For example, if the measured time is 5 milliseconds, there is a ±10% potential for
error, since the actual time is somewhere between 4.5 and 5.5 milliseconds.
For repeatability and confidence, use:
- long timing record durations, or
- many timing records.
See How Long Should a Performance Test Run on page 142 in the Tips for Testing chapter for information
about deciding how long to run a test and how long to make each timing record.
Graphs and Timings

You may have noticed that a throughput line graph sometimes looks like a staircase, or it drops to zero. This
behavior is the graphical representation of a pair with a non-trivial amount of inactive time.
To illustrate, let's walk through an example.
The endpoint follows a script (such as CREDITL) and measures how long it takes to perform the commands in
the script. For a script with default settings, the endpoint runs the script as fast as possible; it's sending and
receiving data over the network almost all the time.
Change the transaction_delay script variable to, say, 1000 ms, and the endpoint doesn't use the network for that
period of time. For those 1000 ms, when the endpoint is inactive, throughput is zero.
Even for scripts with transaction_delay set to 0, it's possible for the endpoint to be inactive (not performing the
script). Typically, this can occur on a computer with many endpoint threads active or other applications
running concurrently. Since an endpoint doesn't own the computer, it has to share it with the other
applications. Consequently, it can take a measurable amount of time for the endpoint just to finish one timing
record and start the next.
You can see the amount of inactive time by viewing the Timing Record Details. If the endpoint is inactive for
more than 25 ms between timing records, the graph drops to zero, to show this inactivity. This will appear to
you as a staircase effect.
How Inactive Time Affects Aggregate Values


Tests with non-zero inactive time can produce inaccurate and misleading values in the aggregate fields,
because of the way Chariot calculates timing records and aggregate values.
To understand why this is the case, let's consider a test with two pairs running over a 10 Mbps Ethernet. If we
ran each pair alone, we could get, let's say, 8 Mbps. Now, let's construct a test with our two pairs, such that
each pair spends half of its time sleeping outside of the timing record loop and half of its time sending data.
Due to competition for the Ethernet, we'll find that one pair often sends while the other pair sleeps,
exchanging roles frequently. Since the SLEEP time is not counted in the throughput calculations, both pairs
report that they achieved about 8 Mbps, resulting in an aggregate throughput of 16 Mbps on a 10 Mbps
Ethernet! The aggregate occurs across the sum of the measured times, not the elapsed time.
To detect this situation, watch for spots on the line graph where a pair's throughput drops to zero, or examine
the Inactive Time column in the Timing Records Detail.
To help avoid this situation, only use non-zero SLEEPs inside the timing record loop (that is, within a
START_TIMER and END_TIMER).
Tips for Testing

Here are some performance testing tips and techniques we think you'll find helpful.
As you become more experienced with Chariot, you'll uncover your own ways to use this software more
productively. Contact us with what you learn; we'll share your lessons with other users via the Web and
in upcoming releases.

Simplifying Network Configuration

Installing and configuring Chariot is simple if you already have the underlying network software running.
Chariot defines fields like TP names and ports dynamically. Even so, there are several things you can
configure to make your testing easier and more effective.
Performance Tuning for TCP/IP

Here are some tips for changing how Chariot measures performance for RTP, TCP, and UDP tests.

Setting the TCP Receive Window

Changing the value of the TCP Receive Window parameter at an endpoint can affect the results you see
when testing for maximum throughput. Note that this is a TCP/IP stack configuration parameter. Many
TCP/IP stacks ship with a default value of 8 KBytes. Changing to a larger value changes performance:
increasing throughput on some stacks and reducing it on others. We recommend experimenting with the
values you use for this parameter.
For example, in Windows NT, TcpWindowSize is a registry value that's not already present. To set it, go
to the Windows NT endpoint and run program REGEDT32.EXE. Navigate to:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters

Go to Edit, Add value, and add TcpWindowSize as a REG_DWORD. The maximum value available is
64K. Matching the TcpWindowSize to the underlying MTU size minus the IP header should improve
efficiency. This means multiples of 1460, 1457, or 1452, depending on Ethernet implementation.
Correspondingly, in the application script change the send_buffer_size and receive_buffer_size to the
registry value of TcpWindowSize. We've seen 40 times 1460 (that is, 58,400 bytes) give the best
throughput measurements.
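For reference only, the same value could also be applied with a REGEDIT4-style .reg file rather than adding it by hand; this is a sketch of our own, not a file shipped with Chariot (0xE420 is 58,400 in hexadecimal):

REGEDIT4

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters]
"TcpWindowSize"=dword:0000e420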
Choosing a Port Number

Chariot lets you specify the port number to use for connections between endpoints. Check or uncheck
the AUTO checkbox when editing the port_number script variable:
- check AUTO to let the endpoints assign the port number automatically.
- uncheck AUTO to let you specify a port number.

Automatic assignment gives the best performance. AUTO is the preferred choice when testing with
multiple pairs. Otherwise, if the same port is specified for multiple pairs, the performance degrades, since
the pairs must share (serialize) the use of the port to run the test.
Uncheck AUTO, and enter a port number between 1 and 65535 if you are trying to emulate a specific
application. Specific port numbers are useful when testing devices that filter or prioritize traffic based on
their port number, such as firewalls or layer 3 switches. We recommend saving the script file with a new
name when you specify port numbers, so you can easily reuse it with other pairs or in other tests.
Some restrictions:
- Only one pair at a time can use a port number for each datagram protocol (that is, RTP, IPX, or
  UDP). For example, only one pair at a time can use UDP with port number 1234 between an
  endpoint pair; however, another pair can be using IPX with port number 1234.
- Some endpoints allow only one pair at a time with the same port number. This restriction is based on
  their internal tasking structure. These endpoints are HP-UX, Linux, and MVS. Expect to see
  message CHR0264 if you attempt to use the same port number more than once in a test with one of
  these endpoints.
RFC 1700 lists the registered port numbers (on the Web, see ftp://ftp.isi.edu/in-notes/rfc1700.txt). Here
are the categories of port numbers:

1 to 1023        Reserved for well-known services (e.g., FTP)
1024             Reserved by IANA
1025 to 5000     Typical range for user-defined services
5001 to 65535    Typical range for server software
See Testing Through Firewalls for detailed information on setting the port numbers when using
firewalls.
Eliminating DNS Latency from Chariot Test Results

When running tests where you use the hostname to define Endpoint 1 and Endpoint 2, and the
CONNECT command is inside the test script's timing loop (such as in short connections), Chariot must
translate the hostname to an IP address. If Chariot cannot do this task during the test setup, Chariot
includes the latency required for the endpoints to translate the hostname in the response time data.
This gives you an inaccurate response time number for the test. Sometimes Chariot is unable to
translate both hostnames to IP addresses.
In some tests, the amount of DNS latency required to translate hostnames can be significant. For
example on Windows NT, the following search can contribute to the time required to resolve a hostname:
1. The endpoint inspects its own hostname.
2. The HOSTS file is inspected on the local computer.
3. The DNS Server is queried.
4. The NetBIOS name cache on the local computer is queried.
5. The WINS server is queried.
6. B-Node broadcasts are made from the local computer.
7. The LMHOSTS file is consulted on the local computer.
To omit DNS latency from your response-time results, use numerical IP addresses. For example, enter a
value like 10.10.10.88 in the Endpoint fields on the Add a Pair dialog.
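If you want a rough idea of how long name resolution takes on a particular machine before deciding between hostnames and numerical addresses, a quick check along these lines can help. This is an illustrative Python sketch only, not part of Chariot, and the hostname shown is hypothetical (it must resolve on your network for the call to succeed):

# Rough check of hostname resolution time on this machine; long or highly
# variable times here would otherwise be folded into short-connection
# response times when hostnames are used in the test.
import socket
import time

start = time.time()
address = socket.gethostbyname("test1")   # hypothetical endpoint hostname
elapsed = time.time() - start
print("Resolved to %s in %.3f seconds" % (address, elapsed))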
Performance Tuning for APPC

Here are some tips for changing how Chariot measures performance for APPC tests.

Defining Modes for Large APPC Tests

A common problem when running large APPC tests is the mode-session limits. The mode definitions
shipped with most APPC software (such as #BATCH and #INTER) are defined with a maximum of 8
sessions between any two LUs. This means you can't readily create Chariot tests with more than 8
identical pairs (using APPC, the same endpoint, and the same mode).
If you want to have more than 8 APPC sessions between two systems, you can do one of the following:
- use several different modes,
- define multiple LUs at each system, or
- increase the number of sessions allowed for a mode.
The first two are error prone and can get confusing. The best and simplest solution is to define new
modes at any system that you will use for large APPC tests. By doing this, you have considerably more
flexibility in creating large tests.
Each APPC software stack provides a different mechanism for defining modes. Here is a snippet from an
NDF file in IBM's CM/2 that defines two modes with session limits of 1000. They define new modes
named BIGINTER and BIGBATCH, each with a session limit of 1000. They use the #INTER and
#BATCH classes of service so that you can still test the route calculation and priority differences in an
APPN or High Performance Routing (HPR) network.
DEFINE_MODE mode_name(BIGINTER)
max_ru_size_upper_bound(16384)
cos_name(#INTER)
min_conwinners_source(500)
plu_mode_session_limit(1000);
DEFINE_MODE mode_name(BIGBATCH)
max_ru_size_upper_bound(16384)
cos_name(#BATCH)
min_conwinners_source(500)
plu_mode_session_limit(1000);
Microsoft's SNA Server comes pre-installed with a basic set of modes for APPC use. To get to the panel
in the SNA Server Admin where modes are defined, choose any local or remote LU, press the Partners...
button, then press the Modes... button. IBM's Personal Communications and Communications Server
software both come pre-installed with a basic set of modes for APPC use. To get to the panel where
modes are defined, start the SNA Node Configuration program, select Advanced (if applicable) and select
Configure Modes. As an additional note, Communications Server's mode definitions for #INTER and
#BATCH are pre-installed with session limits of 8192 versus the normal 8.
Bidding for APPC Sessions

Perhaps a better title for this section would be "Avoid Bidding for Sessions." This little-known function
of APPC can have a major effect on your performance tests. For each APPC session between two LUs,
one LU is the Contention Winner (the ConWinner) and the other is the Contention Loser (the
ConLoser). If the ConWinner wants to use a session, it can start sending data immediately. If the
ConLoser wants to use a session, it has to send a Bid to the ConWinner and wait for permission to use
the session. This means that every APPC conversation on a ConLoser session requires an extra flow in
each direction.
The worst part about bidding for sessions is that APPC, in an effort to simplify networking, hides the fact
that it is occurring. Only by reading a trace or using a network analyzer can you tell that it occurs. We
recommend that you avoid this completely, by defining modes that provide enough ConWinners. In the
mode definitions shown above, each mode can have up to 1000 sessions, with 500 ConWinners in each
direction.
Using APPN COS for Network Node Testing

An interesting test for those evaluating the performance of APPN or HPR is to compare the performance
of a direct LAN connection to intermediate routing. This gives you a good idea of the effect APPN
network nodes have on response time and throughput. Using APPN class-of-service (COS), you can make
this test simple to set up and execute.
Define each APPN end node to have a connection network and a link to a network node. For most
communications between two end nodes, the network node chooses the direct link between the two
systems as the optimal path.
The trick is to define the links to the network node as secure. When you run a test that uses non-secure
modes, such as #INTER or #BATCH, the network node provides a route using the connection network, a
direct link between the two endpoints. When you use a mode that requires secure links, such as
#INTERSC or #BATCHSC, the network node can't use the connection network because it is defined as an
unsecured link. This means the network node returns a route going from one endpoint, to the network
node, to the other endpoint.
Here is an example from an IBM CM/2 .NDF file which defines a link to a network node and specifies that
the link is secure:
DEFINE_LOGICAL_LINK

LINK_NAME(SECLINK)
ADJACENT_NODE_TYPE(NN)
PREFERRED_NN_SERVER(YES)
DLC_NAME(ETHERAND)
ADAPTER_NUMBER(0)
DESTINATION_ADDRESS(X'40E650588C92')
ETHERNET_FORMAT(NO)
CP_CP_SESSION_SUPPORT(YES)
SOLICIT_SSCP_SESSION(NO)
ACTIVATE_AT_STARTUP(YES)
LIMITED_RESOURCE(NO)
SECURITY(GUARDED_RADIATION);

Using this trick means that you can test the different paths through the network by changing the mode
specified in the Chariot test. This is much simpler than reconfiguring your test network.
Consider Deactivating HPR on Windows 95, 98, and NT

You may want to consider deactivating High Performance Routing (HPR) support when using IBM's
APPC software for Windows 95, 98, and NT. For example, using Windows 95 on a Compaq Pentium 75
with 16 MBytes of RAM, we saw 0.1 Mbps throughput when using HPR, as opposed to 4.0 Mbps
throughput with HPR deactivated.
There are cases, especially on slower computers with limited memory, where using HPR routing instead of
standard APPN routing degrades performance significantly. For links that are explicitly configured using
the SNA Node Configuration program, users have the choice of using HPR or not when configuring the
link. However, implicit connections (that is, inbound dynamic connections from other computers) and
connections that use a Connection Network link will always attempt to use HPR.
For both implicit connections and Connection Network connections, the use of HPR can be overridden by
changing an option in the .ACG configuration file. The IMPLICIT_HPR_SUPPORT=1 line of the
PORT=() stanza can simply be changed to IMPLICIT_HPR_SUPPORT=0.
Version 4.1 and Version 4.11 of Personal Communications for Windows 95 and NT use a binary .PCG
configuration file instead of the newer .ACG files. On these platforms, you can download and use the
IBM-supplied unsupported file converter PCSACG.ZIP from the Ganymede Software Web site under
Support. Select the Operating System and Protocol Stack Fixes and Tips link and then select the
Deactivating HPR with IBM Communication Server with PCCOMM link in the Windows 95 section. To
convert the binary .PCG file into the ASCII .ACG file, make the necessary change, then use the same tool
to convert the file back into the binary version.
Making Tests More Flexible

You'll find that you frequently want to create just one test file that you can run using different computers,
without having to edit each endpoint pair in the test. There are two different mechanisms to help you do
this: using aliases and running CLONETST.
Using Aliases
The TCP/IP and APPC protocols already provide a mechanism for defining aliases or nicknames. You
can create your own alias directory in Chariot for the IPX/SPX protocols.
Chariot lets you take advantage of these aliases. For example, create a Chariot test using the network
addresses of TEST1 and TEST2. The console resolves these nicknames before starting a test. Using
this scheme, you can create tests that don't include any real LU names, IP addresses, or IPX addresses,
just aliases. By changing the alias, you change which systems will run the test.
APPC
You can define a Partner LU Alias for each LU. If you define LUs in the network to have aliases of
TEST1 and TEST2, then the console resolves the aliases and can run the test.
TCP/IP
You can define nicknames for IP addresses by defining them in the file named HOSTS, in your ETC
directory. The following example illustrates creating an alias for TEST1 and TEST2:
44.44.44.60     TEST1
44.44.44.62     TEST2

IPX/SPX
Chariot stores aliases for IPX addresses in file SPXDIR.DAT (in the directory where you installed
Chariot). See Working with IPX/SPX Entries on page 56 in The Main Window section for
information on adding entries and making changes to this file.
44.44.44.101    00000002:006097c3f512
44.44.44.103    00000002:00a0247b58de
44.44.44.104    34242342:000000000001

Using CLONETST
Another option for changing network addresses is to use the CLONETST command-line program. It lets
you use one Chariot test file as a template for creating another. If you have a small number of tests that
you run, you can combine the different scripts, variables, and protocols in a master test file. Using this
file as input to CLONETST, you can easily create large tests, without using the console. You can even
create new tests programmatically by creating CLONETST input files and then executing CLONETST
from within your code. With batch programs and a command interpreter like Perl or REXX, you can
automatically create a flexible collection of tests. See CLONETST - Replicating Pairs in a Test on page
114 in the Using the Command Line Programs chapter for more on this powerful command.

Testing Through Firewalls

You can use Chariot to test through firewalls. The way you configure Chariot depends on the location of
the firewall in your network. Chariot provides configuration options for:
- Console through Firewall to Endpoint 1
- Endpoint 1 through Firewall to Endpoint 2
These options are located in the Firewall Options tab on the User Settings notebook. See Changing Your
Firewall Options on page 54 in the Operating the Console chapter for more information.

Console through Firewall to Endpoint 1


Use this setting to configure the console when your firewall is located between the Chariot console and
Endpoint 1. The firewall must be configured to allow communication from the console to the Endpoint 1
port 10115 (TCP) and/or port 10117 (SPX). The firewall must also be configured to allow
communication from Endpoint 1 to the console over a port specified in the Firewall Options tab of the
User Settings notebook. Select a port number that passes the firewall port filtering. Endpoint 1 uses the
port number configured when returning test results.
Chariot Flow                                     Port Number Used
Console to Endpoint 1 test setup using TCP       10115
Console to Endpoint 1 test setup using SPX       10117
Endpoint 1 to Console test results               Port number specified in Firewall Options

Endpoint 1 through Firewall to Endpoint 2


The most common network configuration positions a firewall between Endpoint 1 and Endpoint 2. In this
case you need to allow two types of Chariot data to pass through the firewall: test setup data and test data.

Port Specification
There are three port specifications to pass through your firewall:
Test Setup Data
Chariot test setup data is sent from Endpoint 1 to Endpoint 2 using port 10115 for tests using the
TCP, RTP, or UDP protocol or port 10117 for tests using the SPX or IPX protocol.
Test Data
You typically need to configure a specific destination port number for Endpoint 2 in the application
script. Select a port number that passes the firewall port-filtering criteria.
Streaming Results
If your test is running a streaming script, the results are sent from Endpoint 2 to Endpoint 1 using
port 10115 (TCP) or port 10117 (SPX). Even though streaming scripts must use a datagram protocol,
the results are always sent back over TCP (for UDP or RTP) or SPX (for IPX).
In summary, the following ports must be allowed through the firewall:

Chariot Flow                                             Port Number Used
Endpoint 1 to Endpoint 2 test setup using TCP            10115
Endpoint 1 to Endpoint 2 test setup using SPX            10117
Endpoint 1 to Endpoint 2 test execution                  Port number specified within the script
Endpoint 2 to Endpoint 1 streaming scripts using TCP     10115
Endpoint 2 to Endpoint 1 streaming scripts using SPX     10117

Data Correlation
When you specify a port number in the application script, Endpoint 2 needs to be able to determine which
application script to process. For example, when multiple pairs use the same port number in the
application script, only one Endpoint 2 CONNECT_ACCEPT command can be outstanding at a time.
Endpoint 2 needs to determine which script Endpoint 1 is processing, so it must correlate the connection
data with an application script. This is only true for TCP and SPX protocols; datagram protocols are
restricted to one pair per unique port.
There are two settings in the Firewall Options for data correlation.
Use Endpoint 1 identifier in data
This is the default setting and it uses a four-byte correlator in the first data flow of every connection
(from Endpoint 1 to Endpoint 2). After Endpoint 2 receives the data, it determines the correct script
to execute. This option works for most firewalls. Always use this option if your firewall also
provides network address translation (NAT).
Use Endpoint 1 fixed port
This setting is only valid for TCP connections. Instead of sending a four-byte correlator, the
Endpoint 1 source port and address are used for correlation. This requires Endpoint 1 to use the same
source port for every connection in the test. For this reason, this is not a good option for NAT,
because the address and port number are typically translated. For short connections, this may cause
degradation of performance on some systems, since Endpoint 1 must reuse the same port and wait for
the TCP/IP stack to free the port. Use this option when your firewall acts as an application-level
firewall, that is, it inspects the actual data payload.
Some operating systems do not work with fixed source ports and always use the four-byte correlator.
These are:
- Solaris Version 2.4
- Linux
- Windows 3.1
- MVS
For firewalls that inspect the data payload, specify the .CMP file to use in the application script (see
Creating Your Own User Data Files for detailed information). This file should contain the data
required to pass the specific filter, for example, an HTTP GET request. Chariot needs to know the exact
file size when creating a .CMP file. As Chariot sends a file of any size, it is important to specify the size
to send exactly on the byte boundary to avoid wrapping the data. Each SEND command grabs the first N
number of bytes. If the file is longer than N, the pointer is not reset to the beginning of the file. It
continues until the end of the file marker is reached and then it starts over at the beginning. If this is not
configured properly, your firewall sees data that looks incorrect.
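As an illustration of preparing such a file, the sketch below writes a captured payload into a user data file and reports its exact byte count, so the size to send in the script can be made to match the file exactly. It is a Python sketch of our own; the capture file name is hypothetical, and it is not a tool shipped with Chariot.

# Illustrative sketch: write a captured HTTP request into a user data file and
# report its exact size, so the script's send size can be set to match it.
import os

payload = open("captured_get_request.bin", "rb").read()   # hypothetical capture file

with open("USER01.CMP", "wb") as cmp_file:
    cmp_file.write(payload)                                # no header is required

print("Set the script's send size to", os.path.getsize("USER01.CMP"), "bytes")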

HTTP GET Example


In this example, two files (USER01.CMP and USER02.CMP) are set with an actual sample of data from a
network trace.

File USER01.CMP contains the HTTP GET command (assume 404 bytes)
GET http://www.ganymede.com/support/chariot_technical_questions.htm HTTP/1.0
If-Modified-Since: Tuesday, 07-Apr-98 20:39:41 GMT; length=16755
Referer: http://www.ganymede.com/support/chariot_technical_questions.htm
Proxy-connection: Keep-Alive
User-Agent: Mozilla/4.05 [en] (WinNT; I)
Pragma: no-cache
Host: www.ganymede.com
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */*
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
File USER02.CMP contains the HTTP server response and the actual file (assume 1020 bytes)
HTTP/1.1 200 OK
Server: Microsoft-IIS/4.0
Date: Tue, 21 Jul 1998 22:11:49 GMT
Content-Type: text/html
Accept-Ranges: bytes
Last-Modified: Tue, 07 Apr 1998 20:39:41 GMT
ETag: "92274d4a6562bd1:6457"
Content-Length:
"Text file ----"

Configuration steps:
1. Open firewall port 10115. Usually a custom-defined protocol.
2. Create a pair and specify the script: HTTPTEXT.SCR
3. Edit the script and change the following variables:
   port_number: 80
   size_of_record_to_send: 404
   control_datatype: USER01.CMP
   file_size: 1020
   send_datatype: USER02.CMP
4. Run the test.

Designing Chariot Performance Tests

Building a performance test with Chariot is simple. This section offers suggestions for creating effective,
repeatable performance tests.
In general, always choose the following options for your performance tests:
- Report timings using Batch
  The alternative to Batch, Real-time reporting, causes timing records to flow across the same network
  you're measuring. This can really perturb what's being measured, potentially changing your results
  by several hundred percent.
- Run for a fixed duration, or Run until any pair completes
  These Run Options cause all the pairs to end at the same time. Otherwise, your results get skewed as
  some pairs keep running (thus getting more bandwidth) while others have completed.
- Don't use an endpoint in the same computer as the console
  You don't want the endpoint and console to be competing for CPU cycles or for access to the protocol
  stacks. If the console computer must be an endpoint, use RUNTST to run the test from a command
  line, rather than the console.
- Don't poll the endpoints
  This causes extra flows, slightly perturbing what's being measured. Only poll if you suspect
  something's gone wrong, in which case you should probably start over anyway.
- Don't validate data upon receipt or examine CPU utilization
  Data validation and collection of CPU utilization consume extra CPU cycles at the endpoints.
- Do not run other software on the endpoint computers, and turn off screen savers
Throughput Testing
In addition, consider these recommendations when attempting to measure maximum throughput.
- Use the script FILESNDL.SCR when testing for maximum throughput.
  This script sends 100,000 bytes from Endpoint 1 to Endpoint 2, then waits for an acknowledgement.
  This simulates the core file transfer transaction done by many applications. Experiment with its
  file_size variable. We've seen that a good size is 1 MByte (10 times FILESNDL's default of 100,000
  bytes), but 10 MBytes is worth experimenting with.
- If using the TCP protocol, experiment with Setting the TCP Receive Window.
- Use symmetrical endpoint pairs when simulating multiple pairs.
  Add pairs two at a time. For example, set up computer A as Endpoint 1 in pair 1, and computer A as
  Endpoint 2 in pair 2.
- If using the FTPGET or FTPPUT scripts, don't set the number of repetitions greater than 1.

Streaming Testing
For streaming scripts, don't set your file size or buffer size too low when sending at a high data rate.
Small sizes cause Endpoint 1 to generate too many timing records. Large sizes avoid an aggregate
throughput value that's greater than the network's capacity (which can only occur, by the way, with
non-zero SLEEP times). See the Viewing the Results chapter on page 117 for more information.

How Long Should a Performance Test Run?

Designing a good test can require a bit of experimenting. The ideal Chariot test has the following
qualities:
- It runs long enough so that the relative precision shown on the test results is small. See Relative
  Precision on page 129 in the Viewing the Results chapter to understand the quality of your test results.
- Timing record durations should be long enough to avoid timer errors. See Understanding Timing on
  page 129 in the Viewing the Results chapter for a discussion of the timer resolution and its effect on test
  accuracy. Avoid any timing records shorter than 10 ms.
- Timing records are taken frequently enough to show fluctuations in performance during the test.
- There aren't too many timing records. Tests with more than 10,000 timing records use up
  considerable memory and disk space, and make the Chariot GUI more cumbersome.
A high relative precision value means that you are either running in a network whose performance is
fluctuating, or that your test is too short. Assuming you don't want to change the network you are testing,
you need to make the test run longer. Most performance tests should last between two and five minutes.
You control the duration of a test by varying the number of endpoint pairs, the number of timing records
to generate, the number of transactions per timing record, and the amount of data sent in each transaction.
For the sake of simplicity, let's assume that the number of endpoint pairs and the amount of data are
fixed.
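As a rough illustration with made-up numbers: one pair running a file-transfer script with a 100,000-byte file_size, one transaction per timing record, and 100 timing records moves about 10 MBytes (80 Mbits) in a run; on a path that sustains roughly 4 Mbps, that is only about 20 seconds of measured time, so the number of timing records (or transactions per record) would need to be raised several-fold to reach a two-to-five-minute run.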
If you have one timing record for each transaction, you can see the performance fluctuations that occur
from transaction to transaction. In a large test running small transactions like the Credit Check script,
you could generate hundreds of thousands of timing records. This would require a tremendous amount of
memory at the console and could require a lot of disk space. On the other hand, if you only generated one
timing record for an entire test, the results would be simple to work with, but you would not be able to see
any variation in performance from transaction to transaction. The trick is to find a balance.
See Avoiding Too Many Timing Records on page 148 for information on reducing the number of
timing records and avoiding short timing records.

Multiple Endpoint Pairs on One Computer


Running a large number of endpoint pairs on a single computer can create interesting results. We have
seen two different situations of which you should be aware.
First, in both TCP/IP and APPC, we have seen that the protocol stacks allow scripts that create large data
flows, such as file transfers, to dominate access to the protocol stack. This means that a disproportionate
percentage of the data traffic comes from the larger transactions. If you are trying to get a good mix of
small and large transactions, we recommend using one endpoint system for each type of transaction.
Second, even though multitasking operating systems are supposed to be fair about giving different
processes equal amounts of time, you will always see variations. For example, when running 20
connections on one system, we normally see about 20% difference in performance between the highest and
lowest. If you are testing an intermediate system, such as a switch or router, this doesn't make much
difference, because you are mostly concerned with the aggregate throughput of all the pairs.

Short versus Long Connections


Many of the Benchmark application scripts come in two versions, short and long connections. See the
Messages and Application Scripts manual for a description of the difference between short and long
connections.
The performance difference between short and long connections can be dramatic. How do you decide
which to run? Here are several guidelines that make the decision simple.
- If you are trying to emulate a particular application, use the same connection type that the application
  does.
- If you are trying to test a network backbone or the equipment that supports one, use long connections.
  Because the test endpoints don't have to go through the overhead of starting and ending the
  connection, each endpoint pair can create considerably more traffic.
- If you are trying to test a network protocol stack, run a mix of long and short connections.
Using Chariot for Stress Testing

Building stress tests with Chariot is simple. This section offers suggestions to help you do better stress
testing. These suggestions are generally the opposite of those used for real performance testing; they
cause lots of network traffic, really stressing your hardware and software.
- Run for a fixed duration
  Decide how long you want to stress your network and endpoint computers. However, do some
  experimenting; you don't want to return thousands of timing records to the console that you aren't
  going to use anyway.
  Set a low value for the number_of_timing_records script variable, and a high value for the
  transactions_per_record variable. Multiply your transactions_per_record by 10 times the number of
  hours your test runs; for example, increase it by 10 times for 1 hour, 100 times for 10 hours, and so
  on. Note, however, that the number_of_timing_records variable is ignored when running for a
  fixed duration. Be sure to experiment with high transactions_per_record settings to avoid trashing
  the console with too many timing records.
- Report timings using real-time
  Real-time reporting causes timing records to flow across the network as they're generated, increasing
  the amount of network traffic.
- Regularly poll the endpoints
  This causes extra flows, outside of the pattern of scripts and timing records.
- Validate data upon receipt
  You might as well validate all the data that transferred among endpoints during stress conditions, to
  see if there are any problems with your network hardware and software.
- Use random SLEEP times
  Use the uniform distribution of sleep times to emulate many users, pausing slightly between
  transactions. Choose a range of about 0 to 2 seconds (2000 milliseconds).
- Set your SEND data type to NOCOMPRESS
  This is the toughest data to compress; you'll keep network components that do compression busy
  trying to find patterns in this data.

Using RUNTST for Stress and Regression Testing


RUNTST is the command-line program that executes the same Chariot tests you run at the Chariot
console. It can be a valuable tool for doing regression and stress tests. Using the Chariot console, you can
create and save a variety of tests, which you can then run from RUNTST. Here is an example of a short
batch file that continues executing three Chariot tests, one after another.
:top
RUNTST tests/test1.tst
RUNTST tests/test2.tst
RUNTST tests/test3.tst
goto top
Because RUNTST is run from the command line, it is easy to combine it with other networking tools to
build even larger, more complex tests. See RUNTST - Running Tests on page 111 in the Using the
Command-Line Programs chapter for more information on its operation.

Getting Consistent Results


The endpoint programs have been carefully designed to give identical behavior on identical computers,
whether playing the role of Endpoint 1 or Endpoint 2. For example, let's say you create an endpoint pair,
executing a script from address A to address B. Given that your network performs the same in different
directions, the results you obtain should be statistically identical to those obtained by executing the same
script from address B to address A, if the two computers are identical, and the results are returned to the
console using the Batch reporting method.
To increase the consistency of your results, use the original test that you saved. Recreating the test
manually increases the likelihood that there will be slight differences from the original test that could
produce different results. Be sure that you do not modify the Run Options for the test. Changing these
options will generate inconsistent results.
Remember that the endpoints use your actual equipment and make the same API calls as your networking
applications. It is thus very sensitive to a wide variety of differences in the hardware and software it uses,
just as your real applications are. These differences can significantly affect the results you see. Executing
a script from A to B can give much different results from executing the same script from B to A, if the two
computers differ. This can be disconcerting when executing performance benchmarks, unless you
carefully control all aspects of the hardware and software you are using.
Here are some of the factors we've found in our testing that can affect the consistency of your results.
We've listed them in the order of significance that we've observed.
Endpoint operating system
Operating systems vary significantly in the ways they handle network calls. For example, you can use
the same Intel-based computer, yet see differences in the speed with which they execute TCP calls on
Windows 3.1, 95, 98, or NT. Dont substitute different computers and operating systems without
expecting to see differences. (The Windows 3.1 endpoint, without true multitasking, performs poorly
in the role of Endpoint 1.)
Adapter cards and device drivers
Each different adapter, modem, and device driver has different performance characteristics. An
adapter with new microcode will likely perform differently from an older one.
CPU speed
Aside from SLEEPs, scripts execute on endpoints as fast as their CPU and network hardware allow,
unless limited by the send_data_rate variable.
Protocol stack
Network protocol software always changes its performance behavior from release to release. Every
different protocol stack varies from those by other manufacturers.
For example, the NNTP script encounters a part of TCP's delayed acknowledgment (ACK) algorithm.
At the end of the NNTP script there is a loop where Endpoint 1 sends 25 bytes, Endpoint 2 receives
25 bytes and sends 1,500 bytes back to Endpoint 1, which receives the 1,500 bytes. If the TCP/IP
stack on Endpoint 1 has implemented the delayed ACK algorithm, it typically waits (50-200ms)
before sending back an ACK. In the NNTP script, the send of 1,500 bytes is broken up into 2 frames
(one of 1,460 bytes and one of 40 bytes). The 1,460 byte frame gets sent and the TCP/IP stack on
Endpoint 1 using the delayed ACK algorithm may wait up to 200ms before sending the ACK. After
the ACK is sent, the 40-byte buffer is sent by Endpoint 2. This time, Endpoint 1's receipt of 1,500
bytes is satisfied and Endpoint 1 sends the 25 bytes with the piggybacked ACK. With Endpoint 1
waiting the 200ms on each transaction, each timing record takes about 20 seconds.
We have discovered in our testing that protocol stacks on Linux, NetWare, and IBM's MVS operating
systems have implemented the delayed ACK algorithm. On other operating systems without this
algorithm, the script runs in less than 2 seconds, because the delays are reduced.
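As rough arithmetic: a delay of up to 200 ms added to each send/receive exchange, repeated on the order of 100 times per timing record, accounts for the roughly 20 seconds per timing record described above (100 x 0.2 seconds = 20 seconds).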
Protocol configuration and tuning
Changing a network windowing parameter or buffer size in any piece of network hardware or
software will vary the results you see.
Network Configuration
Modifying the settings on the routers and switches affects the computer performance. Changing the
physical configuration of the network can also produce inconsistent results.
Other active programs
If you have other programs active alongside the endpoint program, they compete for the system CPU,
affecting performance. This is especially noticeable with DOS and Windows applications.
Other network activity
Unless you are using dedicated wiring between endpoint programs, you may be competing for
network bandwidth with programs running on other computers. Competing traffic affects test results.
We've also noticed that a network adapter can keep busy handling excessive broadcast or multicast
traffic in a network.
Console and endpoint together in one computer
The Chariot console and endpoint can reside together in the same computer. However, the endpoint
program is competing with the console for resources. We recommend not using the endpoint at the
console computer when doing serious performance measurements.
Batch vs. real-time reporting
Always use Batch reporting for serious performance measurements.
Foreground vs. background endpoint programs
Different operating systems behave differently in how they allocate resources to programs that run in
the foreground, in the background, as an icon, or as a service or detached. Often, this behavior is
tunable.
Available RAM and disk swapping
The amount of RAM in a computer affects program performance in many ways. Performance
degrades significantly whenever swapping to disk occurs (that is, there isn't enough physical RAM).
Screen savers
The screen savers many of us have were designed to be displayed when you were at lunch. They
consume CPU resources mightily when they kick in. Make sure they aren't active on endpoint
computers while taking serious measurements.

Take Care When Changing Software Versions


Make particular note of software versions, BIOS levels, CSDs, fix packs, and service packs used on the
endpoints. All software we know has its behavior altered with each new change. Some software is being
continually improved by its developers, trying to get ever better performance. Other software grows in
features, which adds more paths through the software. Sometimes, additional internal checking is added
to make the software more robust, which slows its overall performance.
To get consistent results between a pair of computers, every piece of relevant software must be at exactly
the same version and fix level. This includes the BIOS software, the operating system software, the
network device drivers, the network protocol stacks, and the Chariot software itself.

Guidelines for Choosing Your Data


Chariot lets you choose the data sent between endpoints. Two of the available datatypes are ZEROS (all
zeros) and NOCOMPRESS (randomly-generated bytes). About a dozen standard types of data are
installed with the endpoints, representing common type of computer data. These are standard data files
from the Calgary Text Compression Corpus.
The Calgary Text Compression Corpus was created by Ian Witten and Tim Bell, researchers from New
Zealand who worked at the University of Calgary, Canada. Their establishment of a standard test suite
and the collection of reference results has proved valuable to the data compression community.
Many different types of text are represented, and to confirm that the performance of schemes is consistent
for any given type, many of the types have more than one representative. Normal English, both fiction
and non-fiction, is represented by two books and papers (labeled BOOK1, BOOK2, PAPER1, PAPER2).
More unusual styles of English writing are found in a bibliography (BIB) and a batch of unedited news
articles (NEWS). Three computer programs represent artificial languages (PROGC, PROGL, PROGP).
A transcript of a terminal session (TRANS) is included to indicate the increase in speed that could be
achieved by applying compression to a slow line to a terminal. All of the files mentioned so far use ASCII
encoding. Some non-ASCII files are also included: some geophysical data (GEO), a GIF file (LENA),
and a black and white bitmap picture (PIC). The file GEO is particularly difficult to compress, because it
contains a wide range of data values, while the file PIC is highly compressible, because of large amounts
of white space in the picture, represented by long runs of zeros. For more information, see the following
Web site: http://links.uwaterloo.ca/calgary.corpus.html.
In addition, you can supply up to ten files with your own data. Name them USER01.CMP through
USER10.CMP. See Creating Your Own User Data Files on page 147 for more information.
For most tests, ZEROS provides the fastest and most uniform type of data. However, you may want to test
the data-compression capabilities of the software and hardware in your network. NOCOMPRESS causes
the endpoints to generate random data that is difficult to compress. The selection of data files from the
Calgary Corpus gives you the ability to control precisely the contents of what is sent.
The data for types ZEROS and NOCOMPRESS are internally generated by the endpoints. All other types
require .CMP files to be installed on the endpoints.

Creating Your Own User Data Files


You can create up to ten files to provide the data for SEND commands in your scripts. You would
supply your own user data in the following situations:

Data compression is done by the hardware or software in your network, and you want to test
performance with the compression activated.

You have collected a representative sample of the data in your network.


Put the data in one or more files. Name the first file USER01.CMP, the second file USER02.CMP, and so
on. Put your data in the file starting at the beginning; no header is necessary. When sending the data, an
endpoint works sequentially through the file; when it reaches the end, it will wrap, appending the first
byte of the file to the last.
The same USERxx.CMP file must reside at all the endpoints you want to test with, and should be placed in
the CMPFILES subdirectory along with the other Ganymede-supplied .CMP files. Files with the same
name must contain exactly the same data on each endpoint where they are used, if data validation is to be
used.
Performance Considerations
A USERxx.CMP file shouldn't be larger than 1 MByte. An endpoint loads the whole file into memory
when the test is started (to avoid disk I/O while the test is running), and it limits the size to one megabyte.
However, the file shouldn't be too small. The best data compression devices we know can adjust to
patterns in data up to 120 KBytes in length.
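If you have captured representative data in one or more sample files, a small script can assemble them into a
user data file of the right size. The following Python sketch is only an illustration; the samples directory and
file names are assumptions, not part of Chariot, and the 1-MByte cap reflects the endpoint limit described above.

# Sketch: build USER01.CMP from captured sample files (hypothetical paths).
import glob

MAX_BYTES = 1 * 1024 * 1024                    # endpoint loads at most 1 MByte
samples = sorted(glob.glob("samples/*.bin"))   # assumed location of captured data

data = bytearray()
for name in samples:
    with open(name, "rb") as f:
        data += f.read()
    if len(data) >= MAX_BYTES:
        break

with open("USER01.CMP", "wb") as out:
    out.write(data[:MAX_BYTES])                # raw data from the first byte; no header

print("Wrote", min(len(data), MAX_BYTES), "bytes to USER01.CMP")

Copy the resulting file into the CMPFILES subdirectory at every endpoint that will use it.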
If your USERxx.CMP file is too large to fit in memory, you'll encounter error message CHR0270. Try
increasing the limit placed by UNIX systems upon shared memory segments, as described below.

On AIX, Compaq Tru64 UNIX (Digital UNIX), and Solaris, the limit is the largest amount of
memory a process can allocate, which is determined by the amount of virtual memory.

On Linux, there is no way to configure a larger shared memory segment. The limit depends on the
implementation of Linux.

On HP-UX, use the HP-UX SAM facility:

As the root user, start SAM by typing sam.
Open the Kernel Configuration menu.
Open the Configurable Parameters menu.
Increase the shmmax parameter as necessary.

Avoiding Too Many Timing Records


Since endpoints take their measurements at the top of the protocol stacks, many operating system and
computer factors may be involved. It's important to avoid short timing records. Correspondingly, avoid
having too many timing records in your test results. This will take some experimenting, since the
transactions_per_record variable you use should be different when you're testing 56k modems from
when you're testing Gigabit Ethernet. As a couple of absolute rules:

Avoid any timing records shorter than 10 ms.

Avoid more than 10,000 total timing records.

When you run a test for a fixed duration, an endpoint ignores the number_of_timing_records script
variable. This section includes discussions on Using Non-Streaming Scripts and Using Streaming
Scripts, because you change different script variables depending on the type of script.
If the test uses a non-streaming script and has no sleep times, or uses a streaming script and uses
UNLIMITED for the send_data_rate, an endpoint runs as many transactions during that time as it can.
If the test uses a streaming script, the number of transactions that Endpoint 1 runs is based on the
send_data_rate. When you run a test for a duration much greater than typically required, you greatly
increase the number of timing records the endpoints generate. Chariot becomes cumbersome when the
number of returned timing records is above 10,000.
For example, you may run a test with one endpoint pair that generates 100 timing records on your network
in 20 seconds. If you run the same test with the fixed duration set to one hour, Chariot generates
approximately 18,000 timing records. Additional pairs multiply the number.
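As a rough estimate, multiply the per-pair count by the number of pairs:

total timing records = (timing records per pair) x (number of pairs)

For example, 10 pairs at 18,000 records each would return about 180,000 timing records, far above the
10,000-record guideline.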

Using Non-Streaming Scripts


When running a test for a fixed duration, we recommend increasing the script's transactions_per_record
values so the test's total number of timing records is less than 10,000.
Here's a way to reduce the number of timing records to a tenth:
1. In the Test window, press the toolbar Add pair/Edit pair icon. When the dialog is shown, you can
open or edit a script, and change the endpoint pair's addresses and protocols.
2. In the Add or Edit an Endpoint Pair dialog, click on the Open a script file button, which opens a list
of scripts. Once you have chosen a non-streaming script, click on the Edit this script button.
3. An Edit a Script dialog is shown. Double-click on transactions_per_record on the bottom half of
the dialog.
4. Increase the Current Value for the transactions_per_record variable by ten times. For example, if
the number of transactions_per_record is 50, change the number to 500. Save your changes by
pressing the OK button on the open dialogs.
5. From the Run menu, select the Set run options menu item. Click the Run for a fixed duration
button. In the Duration field, select one (1) minute.
6. Run the test and view the results. Using the Raw Data Totals tab, look at the Number of Records.
This is the total number of timing records generated in one minute. We recommend about 50 to 100
records per pair for good statistical significance. If the results contain more than a total of 10,000
timing records, go back and further increase the transactions_per_record. Otherwise, continue to
the next step.

7. Increase the script's transactions_per_record variable to match the duration of your test. Here is
the formula to determine the number of transactions per record:

transactions_per_record = (number of minutes to run) *
                          (transactions_per_record in the one-minute test)

Example:
In the steps above, you entered 500 for the script's transactions_per_record variable and ran the test for
one minute. The results should have been about 300 timing records per pair (100 per 20 seconds = 300
per minute). 300 timing records per pair is a fine number if you are not running hundreds of concurrent
endpoint pairs.
Now you would like to run the test for a weekend (48 hours). Consider the math:

transactions_per_record = (60 minutes per hour) x (48 hours) x (500 transactions per record)

transactions_per_record = (2,880 minutes) x (500 transactions per record)

transactions_per_record = 1,440,000


Open the script and change the transactions_per_record variable to 1,440,000. Then, change the
Duration to 48 hours in the Set Run Options dialog.
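The same scaling can be done with a short calculation. Here is a minimal Python sketch of the arithmetic;
the 500 transactions per record and the 48-hour run are simply the example values used above.

# Sketch: scale transactions_per_record from a one-minute calibration run.
def scaled_tpr(tpr_one_minute, planned_hours):
    minutes = planned_hours * 60
    return minutes * tpr_one_minute

print(scaled_tpr(500, 48))   # prints 1440000, the value used in the example above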

Using Streaming Scripts


Streaming scripts do not use the transactions_per_record variable; a timing record is generated at the
end of each SEND command. To reduce the number of timing records, increase the file_size variable on
the SEND command. The file_size variable controls the amount of data per transaction.
If you specify a send_data_rate other than UNLIMITED, you can use the following equations to estimate
the number of timing records you will receive. The basic network equation is:
time of timing record = # of bits / bits per second

Thus, using the script variables, the equation is:


time of timing record = (file_size * 8) / send_data_rate

For example, if you are using REALAUD.SCR with the script default parameters, you can expect each
timing record to take about 1.39 seconds: (14,040 bytes * 8)/80,736 bps.
Use the following formula to determine the value to use for the file_size variable, where the time of a
timing record is the planned test duration divided by the desired number of timing records:

file_size = (time of timing record * send_data_rate) / 8
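The equations above can be combined into a short calculation. The following Python sketch only
illustrates the arithmetic; the REALAUD.SCR figures are the defaults quoted above, and the one-hour test
length is an arbitrary example.

# Sketch: estimate timing-record length and pick a file_size for a streaming script.
def record_time(file_size_bytes, send_data_rate_bps):
    # time of one timing record, in seconds
    return (file_size_bytes * 8.0) / send_data_rate_bps

def file_size_for(records_wanted, test_seconds, send_data_rate_bps):
    # file_size that yields the desired number of timing records over the test
    time_per_record = float(test_seconds) / records_wanted
    return int(time_per_record * send_data_rate_bps / 8)

print(round(record_time(14040, 80736), 2))   # REALAUD.SCR defaults: about 1.39 seconds
print(file_size_for(100, 3600, 80736))       # file_size for 100 records in a one-hour test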

If you are running a script at full speed, that is, with the send_data_rate set to UNLIMITED, here's a
way to reduce the number of timing records to a tenth:
1. In the Test window, press the toolbar Add pair/Edit pair icon. When the dialog is shown, you can
open or edit a script, and change the endpoint pair's addresses and protocols.
2. In the Add or Edit an Endpoint Pair dialog, click on the Open a script file button, which displays a
list of scripts. Once you have chosen a streaming script, click on the Edit this script button.
3. An Edit a Script dialog is shown. Double-click on the file_size variable on the bottom half of the
dialog.
4. Increase the Current Value for the file_size by ten times. For example, if the file_size is
3,000 bytes, change the number to 30,000.
5. From the Run menu, select the Set run options menu item. Click the Run for a fixed duration
button. In the Duration field, select one (1) minute.
6. Run the test and view the results. Using the Raw Data Totals tab, look at the Number of Records.
This is the total number of timing records generated in one minute. We recommend about 50 to 100
records per pair for good statistical significance. If the results contain more than a total of 10,000
timing records, go back and further increase the file_size variable.

Automating Tests To Form a Simple Monitor


Although Chariot is not a network monitor, you can still create a test that tracks network performance.
From the Chariot console, create a test with endpoint pairs that have the script and network addresses that
you want to measure. For example, set the number_of_timing_records variable to 1000, and the
delay_between_transactions variable to 600,000 in each script. Because the delay is in milliseconds,
600,000 causes a delay of 600 seconds (ten minutes). Set the Run Options to show results in Real-time.
When you start the test, you will see the results updated every ten minutes. If you stop the test, you can
see the endpoint details to find how your network performance changes during the time of your test.
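With these example values, each pair reports roughly one timing record every ten minutes, so you can
estimate how long the monitor runs before the script completes:

1,000 timing records x 10 minutes per record = 10,000 minutes, or roughly one week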
Ganymede Software's Pegasus™ family of products provides end-to-end performance monitoring and
reporting from an end-user perspective. Pegasus alerts network managers of performance problems before
their users complain and provides unique information about network performance. While other tools
look into the network, pointing to problems on specific segments or ports, Pegasus views performance
through the network, providing a single, accurate view of network response time and connectivity.
Pegasus works well with Chariot to provide the best enterprise performance management solutions for
corporate enterprise networks. Use Chariot to predict performance and select equipment, then use
Pegasus to monitor the network. Once you detect performance problems, use Chariot again to test the
network after changes and establish baselines for Pegasus monitoring.



Troubleshooting
It's possible that you'll run into problems running Chariot. Problems are generally related to how the
communications software is set up and how the tests are configured.
This chapter helps you find the information necessary to solve problems you encounter. It also describes some
common problems, and how to solve them.
Also, see the Working with the Error Log Viewer section on page 96 for more information.

Reading Error Messages


Chariot errors are reported at the console with the Error Message dialog. You'll recognize it by its stop sign
icon in the upper left-hand corner.
The top line identifies which computer detected the error or which file was being used. When an error occurs
while running a test, the next line in the dialog names the pair number and the network names of Endpoint 1
and Endpoint 2. This field can be scrolled to the right, if you have long names.
Next is a block of text showing the primary error message. Here's an example of the primary message:
CHR0149: There was no remote program waiting to
accept our TCP sockets connection.

Press the Message help button at the bottom of this dialog. This is probably the most important button in
Chariot! We've worked hard to make the Message Help thorough, accurate, and helpful. We welcome your
feedback on ways we can continue to improve the help that's provided.
Sometimes there is secondary error information shown in this dialog, to provide further isolation for the
problem.
The Show details button gives advanced technical information about the problem. For example, it shows the
return code number for failed communications calls.

For SPX and TCP problems, it shows the port number and call number, among other values.

For APPC problems, it shows the fully-qualified LU name and TP name.

All communications errors reported to the console, or that the console detects, are written to the console's error
log. The path and name for this error log file is shown at the bottom of the error message dialog.
See the Messages and Application Scripts manual for a numerical listing of the Chariot error messages.


Viewing Detailed Error Information


You reach the Detailed Error Information dialog by pressing the Show details button on an error message.
It is intended for technical personnel, giving them the details that were also written to the error log.
The date and time of the error is always listed first, followed by where the error was detected.
The text of the primary and optional secondary messages follows, including their error numbers. You can copy
text in this dialog to the clipboard by highlighting it with your mouse, then pressing the Copy key (Ctrl+C).
Text that has been copied to the clipboard can be pasted into any compatible application.
The details that are logged next vary, depending on the type of error and the network protocol being used.
Network personnel and communications programmers might find these details helpful in debugging failures.
Some of the details are useful only to Ganymede Software Customer Care.

Determining Which Computer Detected the Error


The first step in troubleshooting is usually determining which computer detected the error.
For most errors involving setup and file manipulation, the console is the computer that detects the error.
For errors that occur while running a test, the error could have been detected on a particular endpoint pair by
the Console, by Endpoint 1, or by Endpoint 2. The program that detects the error reports to the console, which
shows the error and logs it. The first line of the console error message tells which computer detected the error.
If there's some reason you can't see the error at the console, examine the error log at the endpoints involved in
the problem. (See the Working with the Error Log Viewer section on page 96 for more information.) A
formatted error log entry should contain a line of the following form:
Error was detected by Endpoint 1.

If the error was detected by Endpoint 1 or Endpoint 2, check the Test Setup at the console to determine the
actual network address of the computer where the error was detected. All Operator Actions described by the
message help should be taken at the computer which detected the error, unless otherwise specified.
Although one computer may detect an error, the solution may actually lie elsewhere. For example, if Endpoint
1 detects an error indicating that a network connection could not be established, it may be because there is a
configuration error in the middle of the network or at Endpoint 2.


Common Problems
Here are some possible problems you may encounter.

Insufficient Threads
The Chariot console creates one or more threads for each endpoint pair when running a test. This is in
addition to the threads created by the underlying network software (as well as those used by other concurrently running applications).
In our testing, we did not exhaust threads in our default settings for Windows NT until we reached about 7000
threads. We don't believe you should encounter out-of-threads problems with Windows NT; please let us know
if you do.
Windows 95 and Windows 98 are much more severely limited in their thread capacity.

Insufficient Resources
If you receive an insufficient resource error while running Chariot, your computer does not have enough
memory available to run Chariot successfully. You should close other applications that you currently have
running and then restart Chariot.

Protection Faults and Traps


Protection faults or traps are the operating system's way of telling you when a program is trying to use memory
that it doesn't own. They can occur in a Chariot program, in any library routines called by Chariot, or in the
operating system itself. The default way that they're handled is with a popup message box to the screen. This
popup shows program instruction values in hex, which aren't helpful to you as a user.
If using Windows NT
Windows NT writes an entry to a file named DRWTSN32.LOG when it encounters a trap or protection fault.
This file is written to the directory where you installed Windows NT. The default location is C:\WINNT.
It contains information that is immensely helpful to us in finding and fixing bugs.

Assertion Failures
Chariot does a lot of internal checking on itself. You may see the symptoms of this checking with an
Assertion failed message. If you see this in a message at the console, it asks whether you want to Exit. The
best choice is Yes; choosing No probably results in a protection fault or yet another Assertion Failure. If
you choose No and you are able to continue (that is, the bug was minor), we recommend saving your test files
as soon as possible.
If you encounter an assertion failure, please write down the sequence of things you were doing when it
occurred. Chariot captures details related to the problem in an ASCII text file named ASSERT.ERR in your
Chariot directory. Save a copy of the ASSERT.ERR file, and send it back to us via e-mail.


Damaged Files
Binary files can be damaged (that is, truncated) if you copy them using wildcards at a command prompt. The
problem occurs when the hex character X'1A' is encountered. For example, the following command is the
wrong way to copy a binary script file for the database-update transactions:
COPY DB*.SCR TEMP.SCR

Doing a DIR for file DBASES.SCR and file TEMP.SCR should show that the COPY command has truncated the
file. The following command is also the wrong way to copy binary files:
COPY DB*.SCR TEMP.SCR /B

This creates a truncated file that is even a byte smaller than the previous copy. Here's the RIGHT way to copy
binary files; don't use wildcards!
COPY DBASES.SCR TEMP.SCR

Locale Could Not Be Determined


The locale file tells Chariot the language of the version of Chariot that you are using. Based on the language,
Chariot determines which time and date formats, numeric separators, and currency symbol to use.
The locale file is located in the directory where Chariot is installed. The file is named LANGXXX.LCL,
where XXX is the code for the language of your version of Chariot. For example, if you are using the English
version of Chariot, the name of the locale file is LANGENU.LCL. The settings pre-defined in the locale file
override any settings you select in Regional Settings in the Control Panel.
If the locale file is not in the directory where Chariot is installed, Chariot will not run and you will receive
an error message. The file may be missing because of an installation error or because it was deleted.
Reinstall Chariot to restore the locale file; Chariot should then run correctly.

If You Find a Problem


We need your help in collecting information to pinpoint the problems you might encounter.
1. Write down what you were doing when the problem occurred.
Most important is the sequence of steps you took, the keys you pressed or menu selections you made, and
the files you were working with.
2. Chariot Test and Results files
Capture the test file you were using when the problem occurred (a binary file created at the console, with
extension .TST).
3. Chariot Debugging files
Capture any debugging files generated by our product (a small collection sketch follows this list):
ASSERT.ERR (an ASCII file)
CHARIOT.LOG, RUNTST.LOG, CLONETST.LOG, ENDPOINT.LOG (binary files)


Protection fault information (ASCII files)
For Windows NT, file DRWTSN32.LOG (an ASCII file)
4. Network configuration files
Capture the binary configuration files for the network protocols you are using:
For Windows NT:
For TCP/IP, copy the files found in the following directory:
d:\path\SYSTEM32\DRIVERS\ETC
where d: and path are the drive and path where you installed Windows NT.
For Windows 95/98:
For TCP/IP, copy the HOSTS and SERVICES files found in the following directory:
d:\path\WINDOWS
where d: and path are the drive and path where you installed Windows 95/98.
5. Network Trace files
To recreate a problem, we may ask you to activate tracing for the network protocols you are using. If you
recreate the problem with tracing active, we'll need the resulting trace files.
6. Operating System configuration files
Chariot writes entries in the Windows Registry. If you are familiar with using the REGEDT32 command,
you may consider capturing a text listing of the Registry.
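If you need to gather these files from several computers, a small script can save time. The following Python
sketch simply copies whichever of the files listed above it finds into a single folder; the directory names are
assumptions for illustration and should be adjusted to your installation.

# Sketch: collect Chariot diagnostic files into one folder (hypothetical paths).
import os, shutil

CHARIOT_DIR = r"C:\Chariot"        # assumed Chariot install directory
WINNT_DIR = r"C:\WINNT"            # default Windows NT directory
DEST = r"C:\chariot-diag"

candidates = [
    os.path.join(CHARIOT_DIR, "ASSERT.ERR"),
    os.path.join(CHARIOT_DIR, "CHARIOT.LOG"),
    os.path.join(CHARIOT_DIR, "RUNTST.LOG"),
    os.path.join(CHARIOT_DIR, "CLONETST.LOG"),
    os.path.join(CHARIOT_DIR, "ENDPOINT.LOG"),
    os.path.join(WINNT_DIR, "DRWTSN32.LOG"),
]

os.makedirs(DEST, exist_ok=True)
for path in candidates:
    if os.path.exists(path):
        shutil.copy2(path, DEST)   # copy, preserving file timestamps
        print("collected", path)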

Functional Limitations and Known Problems


The latest functional limitations in this version of Chariot are described in the README.TXT file. A Readme
file is installed at the console and at the endpoints. See the Ganymede Customer Care chapter for the URL of
our Web site, where we track the latest bugs and fixes.

Known Problems in Microsoft's TCP/IP for Windows NT


Microsoft's TCP/IP stack is sensitive to extended barrages of short connections. Performance will remain
consistent for a while, but you can expect to see delays of 3 or 6 seconds periodically as TCP/IP recovers its
internal control blocks.

Known Problems in Microsoft's IPX/SPX for Windows NT

6-second delay, from Windows NT 4.0 to NetWare 4.x


When making connections from Windows NT 4.0 to NetWare 4.x, there's a consistent 6-second delay in
the processing. This is fixed by Microsoft's Windows NT 4.0 Service Pack 3.

Windows NT SPX protocol slows down during loopback


The SPX protocol supplied by Microsoft in Windows NT 4.0 and 3.51 is subject to slowdowns when
running to itself, that is, with loopback.


Known Problems in Microsoft's SNA Server


If you're using Microsoft's SNA Server, we strongly recommend version 4.0.
At a minimum, SNA Server version 2.11 with Service Pack 2 is essential if you hope to do much extended
testing. With SNA Server 2.11 there are still numerous problems. Many of these can potentially lock up the
SNA Server Clients, requiring them to be rebooted.
We've also found vast differences between the performance and stability of the different LAN protocols which
SNA Server uses to connect Clients to itself. The worst performance and least stable tests involve running
multiple pairs from Client to Client. Better performance and stability occur running from SNA Server Client
to SNA Server, and the best performance occurs running from SNA Server to other genuine APPC computers,
such as Communications Server for NT or another SNA Server.
Here are the major items we've seen, grouped by LAN protocol.
TCP/IP
Overall, having SNA Server Clients connect over TCP/IP gives the best performance and stability.
However, running more than about 3 pairs per endpoint often causes hangs during the test. These are
usually reported with Chariot message CHR0257. Refer to the help for this message for potential
solutions.
Named Pipes
This is somewhat stable, though performance is approximately half that of TCP/IP. Additionally, running
more than a few tests at the same endpoint will often return with the SNA Server Client reporting Out of
Memory, even on computers with substantial amounts of RAM.
IPX/SPX
Running even one pair using a script with good throughput characteristics runs well for a short period of
time (about 20 or 30 seconds) and then performance will decrease several orders of magnitude.
Thus, the problems we've encountered generally center around exceeding the low capacity of the SNA Server.
Error #625 may occasionally occur while booting, when the endpoint is using SNA Server: The SnaBase
service must be started before any Service which uses SnaBase. The only workaround is to start the endpoint
service manually. Watch our Web site for updates and workarounds. We're closely tracking Microsoft's fixes
and bulletins.

Known Problems in IBM's Communications Server for Windows NT

MVS reports CHR0129 during a connection


Occasionally when running throughput intensive tests from CS/NT to MVS, MVS will report CHR0129
An established APPC connection failed during processing with sense data of 0x20010000.
Solution: IBM is working on a fix. Check Ganymede Software's Web site for more information.

CS/NT version 5.00 returns sense data of 0x00000000


CS/NT version 5.00 returns sense data of 0x00000000 when a connection fails due to reaching the session
limit for the mode specified.
Solution: If sense data 0x00000000 is received, consider whether this may be your problem.


CS/NT version 5.01 returns unexpected sense data.


In CS/NT version 5.01, all sense data is essentially unusable.
Solution: IBM is working on a fix. Check Ganymede Software's Web site for more information.

Known Problems in IBM's Personal Communications for Windows NT

Problems encountered initializing APPC


V4.1 and V4.11 of Personal Communications for Windows NT have a problem initializing APPC in some
cases. This manifests itself with Chariot if all the following are true:
1. A Windows NT computer has endpoint software installed,
2. The same computer uses V4.1 or V4.11 of Personal Communications for Windows NT,
3. The computer is used in a Chariot test as Endpoint 1, and
4. The network protocol for Console to Endpoint 1 is APPC.

Symptom: Test pair will initialize fine, go to running state, but never return any timing records (that is, it
will stay at 0 of XXXX timing records reported) and will never complete.
Solution: A fix is provided by IBM for this problem with Personal Communications for Windows NT. It
can be obtained from: ftp://ps.software.ibm.com/ps/products/pcom/fixes/v4.1x/winnt/ic16809

This fix requires the user to install several files in several different directories, which can become time
consuming if this fix is needed on multiple computers. Since IBM does not ship any installation aid for
this fix, Ganymede Software has written an installation .BAT file to help with the install. Download
INSTFIX.ZIP, unzip into the same directory where the IC16809.EXE file has been unzipped, and type
INSTFIX for directions.

No TP view in SNA Node Operations


After running approximately 580 APPC tests to a Windows NT endpoint as Endpoint 2, PCOMM's SNA
Node Operations program can no longer successfully show the Transaction Programs view.
Solution: IBM has fixed this and it will appear in the next version of PCOMM.

Known Problems in IBM's Personal Communications for Windows 95
PCOMM version 4.21 for Windows 95 has a bug when installed with TCP/IP on Windows 98. One of the
AnyNet DLLs erroneously displaces a key TCP/IP DLL. To use APPC and TCP/IP together on Windows 98,
you must download Personal Communications v4.2 CSD1 from:
http://www.software.ibm.com/network/pcomm/support/fixes/fixes_pcommwin32_csd1.html


Getting the Latest Fixes and Service Updates


We've found communications software often to be fragile. Its developers are constantly working to make it
more robust, as the software gets used in an ever-wider set of situations.
We recommend working with the very latest software for the underlying operating system and communications
software. Here are the best sources we've found for the software used by the console and endpoints.
Additional information is described in the individual endpoint chapters at the back of this manual.
See the Ganymede Software Web site at http://www.ganymede.com/support for links to the latest software
updates.

Updates for Microsoft Windows NT


Microsoft posts code and driver updates directly to the following anonymous FTP site at
ftp://198.105.232.37/fixes/usa/.

Updates for Microsoft Windows 95


Microsoft posts code and driver updates directly to the following Web site:
http://www.microsoft.com/windows95/default.asp. Select the Downloads link.

Updates for Microsoft Windows 98


Microsoft posts code and driver updates directly to the following Web site:
http://www.microsoft.com/windows98/default.asp. Select the Downloads link.

Updates for Microsoft SNA Server


Microsoft posts code and driver updates directly to the following Web site:
http://www.microsoft.com/sna/default.asp
We strongly recommend that users of Microsoft SNA Server version 2.11 apply Service Pack 2 to the SNA
Server. Updates are available from the Web site mentioned above. See the README.TXT file associated with
this Service Pack for more information.
Also, file 211SP2IC.EXE (at the above FTP site) is recommended for Windows NT computers running the
SNA Server client version 2.11 software.

Updates for IBM's SNA Software for Windows NT


For information on IBM's Personal Communications (PCOMM) family of software, access the Web site at
http://www.software.ibm.com/network/pcomm/support/


For information on IBM's Communications Server for Windows NT, access the Web site at
http://www.software.ibm.com/network/commserver/support/fixes/fixes_csnt.html (fixes)

Updates for Novell Client Software


Novell posts code and driver updates to the following Web site: http://support.novell.com/


Ganymede Software Customer Care


We like hearing from our customers, no matter what their concerns. We provide both Customer Service
and Technical Support.

Customer Service
For any Ganymede Software product, call Customer Service for:

Upgrade orders
Registration problems
Product information
Referrals to dealers and consultants
Replacement of missing or defective parts (disks, manuals, etc.)
Information about technical support services

For questions about how to use our software, see the Troubleshooting Guidelines section below. In
addition, we keep the Ganymede Software Web site up-to-date with the latest information on all aspects of
our products.
http://www.ganymede.com/
If necessary, you can contact us at:
Ganymede Software Inc.
1100 Perimeter Park Drive, Suite 104
Morrisville, NC 27560-9119
888-GANYMEDE (888-426-9633) toll free in the USA
919-469-0997 (voice)
919-469-5553 (fax)
e-mail: info@ganymede.com

Troubleshooting Guidelines
Our customer care team is always happy to assist you with any problems you encounter. We recommend
you try the following steps before calling for assistance, as you can usually locate efficient solutions for
many common problems in the existing documentation:
1. Check the Technical Support Web site. Our technical support Web site provides:
Frequently-asked questions (and answers)
Links to the latest fixes and third-party software updates
Downloads of the latest documentation, including product user guides and specification sheets
Tuning tips
Application Script Library


To reach the technical support Web site, point your browser to http://www.ganymede.com/support.
2. Review the Troubleshooting chapter in your product's User Guide. This chapter provides solutions
to many common problems as well as information about viewing the error logs and getting the latest
product updates and fixes.
3. Review the README file. This file contains updated information which does not appear in this
version of the manual. It's a good idea to print this file and keep a copy close at hand.

How to Get Technical Support


If you are still unable to resolve your problem, or if you suspect you've found a bug, contact the Ganymede
Software customer care team for technical support.
Registered users can reach our customer care team at:
888-GANYMEDE (888-426-9633) toll free in the USA
919-469-0997 (voice) (select option #6)
919-469-5553 (fax)
e-mail: info@ganymede.com
You can send us questions by fax or e-mail any time.
We think the best way to contact us directly is by e-mail, because it gives us an easy way to exchange
files.
Telephone support is available Monday through Friday, from 9:00am to 6:00pm Eastern time.
If you are calling back after previously working with one of us, please ask for the same person you
worked with on your previous call. This ensures you can talk to someone already familiar with your
question or problem.


Index
A
abandon run, 82
Abandoned status, 84
about box, 60
About Chariot dialog, 60
ACK, 145
add
multicast group, 71
pair(s), 70
addresses
aliases for, 56
aggregate throughput exceeds capacity, 131
aliases, 137
for IPX addresses, 56
ALL CAPS text, 4
APING
testing APPC connections, 25
APPC, 20
bidding for sessions, 136
defining modes for large tests, 135
network address
for IBM's software for Windows NT, 21
for SNA Server, 21
session limits, 135
TP name for endpoints, 26
APPC mode name, 24
application script name, 100
Application scripts
Modifying parameters, 35
APPN, 26
APPN class-of-service (COS), 136
APPN testing, 136
ASSERT.ERR, 156
assertion failures, 155
AUD file, 108
audit log, 108
authorization key, 15
automation of tests, 150
avoiding too many timing records, 148
Axis Details dialog, 75, 76, 77

B
batch reporting, 79
Beta product type, 4
bidding for APPC sessions, 136
binary files, 156
broadcast

definition of, 38
bugs in IBM Communications Server for Windows NT,
158
bugs in IBM PCOMM for Windows NT, 159
bugs in SNA Server, 158
bugs in Windows NT IPX/SPX, 157
bugs in Windows NT TCP/IP, 157
bytes received, 124
bytes sent, 124

C
calculation
response time, 119
throughput, 119
transaction rate, 119, 120
Calgary Corpus
Web site, 147
change display fonts, 48
change display fonts on operations menu, 49
change user settings, 49
undo button, 49
change user settings menu, 48
changing software versions, 146
changing the Run Options, 78
Chariot
directory structure, 15
installation, 11
installation files, 15
package, 11
product types, 4
CHARIOT.LOG, 116
for service, 156
checksum, 109
choosing data types, 147
choosing how to end a test run, 78
choosing how to report timings, 79
Cisco IOS, 29
class-of-service (COS), 136
CLONETST, 114
using for flexible tests, 137
CLONETST.LOG, 116
for service, 156
CM_TP_NOT_AVAILABLE, 25
CM_TP_NOT_RECOGNIZED, 25
Collapse all groups menu item, 72
commas, in CSV format, 66
comma-separated values, 65
Communications Manager/2, 20
Communications Server, 20
comparison


opening, 94
saving, 93
Comparison window, 92
compatibility considerations, 2
confidence interval, 128
confidence interval, 95%, 128
in calculating relative precision, 129
ConLoser, 136
connect timeout, 80
connection network, 26
connections
short vs. long, 143
constant value, 101
ConWinner, 136
copy, 67, 69, 94
copying binary files, 156
copying error details to the clipboard, 154
copying test pairs, 114
Courier font text, 4
CPI-C return code, 25
CPU utilization, 80
CRC checksum, 108
creating tests, 47
csv file format, 65, 108
defaults, 53
cut, 66

D
damaged files, 156
DAT file, 108
datagram
fragmentation, 37
modifying parameters, 35
performance, factors affecting, 37
Retransmission Timeout Period parameter, 36
simulating applications, 35
Support, 33
Window Size, 36
Datagram
Number of Retransmits before Aborting parameter,
37
tab, 38
datagram applications, 34
datagram defaults, 51
datagram protocol applications, 19
Datagram tab, 88
deactivate HPR, 137
default directories, 50
default protocol, 49
default run options
changing, 50
default service quality, 49
delete, 67
Demo product type, 4
DEREGISTER.DAT file, 17
deregistration key, 17
deselect all pairs, 66, 94
detailed error information, 154

details
endpoint pair, 125
directories
default, 50
directory structure
for console, 15
distribution, 101
DNS Latency, 134
documentation
conventions, 4
online help, 4
domain name
for Windows NT, 27
double-quotes, in CSV format, 66
Dr. Watson, 155
DRWTSN32.LOG, 156
duration of tests, 142

E
edit
multicast group, 71
pair(s), 66, 70
script command parameters, 101
script variables, 101
scripts, 99
edit menu
(Comparison window), 94
elapsed time, 129
e-mail address, 163
end type, 78
endpoint
definition, 7
endpoint configuration, 123
endpoint configuration details, 127
Endpoint Configuration tab, 88
endpoint failures during a run, 80
ENDPOINT.DAT, 70
file description, 108
ENDPOINT.LOG, 116
for service, 156
ERR file, 108
Error detected status, 84
error log
CLONETST, 96
endpoints, 96
location, 96
RUNTST, 96
Error Log Viewer
filtering, 97
log
details, 98
opening, 97
overview, 96
searching, 98
error logs
formatting, 116
Error Log Viewer
exporting, 97


error messages, 153


Evaluation product type, 4
evaluation version, 15
Excel, 108
excessive timing records, 142
exiting the Script Editor, 102
Expand all groups menu item, 72
exponential distribution, 101
export to ASCII, 62, 93
export to HTML, 62, 93
export to spreadsheet, 62, 93
exporting
custom options, 64
graphs, 64

F
fairness, 143
file menu
(Error Log Viewer), 97
file type handling, 108
filtering the Error Log Viewer, 97
find
in the error log, 98
Finished status, 84
firewall options
for timing records, 54
Firewalls
Testing through, 138
flexible tests, 137
FMTLOG, 116
FMTTST, 112
format tests, 112
formatting error logs, 116
fully-qualified LU name
defined, 20

G
Ganymede Software
Customer Care, 163
Gbps defined, 52
getting consistent results, 145
getting started, 7
GIF file, 65, 109
GQOS, 29
graphs
as GIF files, 64
bar graph, 76
configuration, 75
histogram, 77
line graph, 75
lost data, 86
Group by menu item, 72
Group sort order menu item, 72

H
handling endpoint failures during a run, 80


handling file types, 108


hardware
for Windows 95/98 console, 13
for Windows NT console, 12
help, 4
help menu, 60
host
for Windows NT, 27
how to report timings, 78
HPR deactivation, 137
HPR testing, 136
HTML file, 109
using FMTTST, 112

I
IBM Communications Server for Windows NT, 21
IBM Personal Communications for Windows NT, 21
icons
Chariot, 14
icons on the Test window toolbar, 61
inactive time, 129
Information menu item, 72
INI file, 109
Initialized status, 83
Initializing status, 83
installation files
for console, 15
installing Chariot, 11
installing Chariot 3.1
over Chariot 2.2, 14
insufficient resources, 155
introducing Chariot, 7
IOS, 29
IP addresses, 134
IP Multicast, 42
IPCONFIG
for Windows NT, 27
IPX, 26
configuring Windows NT console, 27
using IPXROUTE on Windows NT console, 27
IPX address aliases, 56
IPXROUTE command
on Windows NT console, 27

J
jitter, 41
results, 121
Jitter tab, 87

K
kbps defined, 52
Kbps defined, 52
KBps defined, 52
keys help
Comparison window, 95
Error Log Viewer, 99


Main window, 60
Script Editor, 107
Test window, 91

L
LAN connections, 136
LAN environments with APPN, 26
LANG.LCL, 156
large APPC tests, 135
LCL file, 109
length of tests, 142
license code, 15
Local File, 156
locale file, 109
LOG file, 109
log files
formatting, 116
long connections, 143
loopback
not with SNA Server, 21
lost data, 86
Lost Data Tab, 86
Lotus 1-2-3, 108
lower distribution, 101

M
manual
conventions, 4
mark selected items, 66, 94
marking pairs and groups, 66, 94
Mbps defined, 52
measured time, 129
in confidence interval calculation, 128
in raw data totals, 124
in throughput calculation, 119
menu
change display fonts, 48
change user settings, 48, 93
edit (Comparison window), 94
File, 49
file (Error Log Viewer), 97
file (Main window), 49
help menu, 60
options, 48
options menu, 49
options menu (Error Log Viewer), 98
tools menu, 55
view (Error Log Viewer), 97
window menu, 84
messages, 153
Microsoft Windows 95/98, 13
Microsoft Windows NT, 12
Microsoft WinSock 2, 30
mode names
predefined, 24
modifying application scripts parameters, 35
modifying datagram parameters, 35

monitor function, 150


multicast
definition of, 38
reserved addresses, 43
Time To Live, 39
multicast group
adding/editing, 66, 71
overview, 43
test totals, 118
multimedia
performance, 40
test parameters, 39
multimedia support, 38
multitasking operating systems, 143

N
n/a, 83
NAT, 54, 139
network address, 19
APPC LU name
for IBM's software for Windows NT, 21
for SNA Server, 21
IP, 27
RTP, 27
UDP, 27
Network Address Translation, 54, 139
network applications
connection-oriented vs. connection-less, 33
network node, 26
network node testing, 136
new script, 102
new test, 49
non-secure mode, 136
normal distribution, 101
notebook
options, 49
Novell support, 161
Number of Retransmits before Aborting parameter for
datagrams, 37

O
online help, 4
open a test, 49
open comparison, 93
opening
comparison, 94
operating the console, 47
options menu, 49
options menu (Error Log Viewer), 98
out of resources, 155
out of threads, 155
output template, 62
output templates, 55
adding, 55
copying, 56
defaults, 53
modifying, 55


P
pair
add, 70
edit, 70
paste, 66, 67
PCOMM, 21
Pegasus, 150
Ping, 30
Poisson distribution, 101
Poll endpoints now menu item, 77
polling, 84
polling the endpoints, 82
popups
assertion, 155
port, 54
port number, 134
previous license code, 17
primary messages, 154
print and export options, 62
print options, 93
output templates, 55, 56
output templates, 53
printing
custom options, 64
protection faults, 155
protocol
connection-less, 33
connection-oriented, 33
default, 49
protocols supported by the console, 19
protocols supported by the endpoints, 19

Q
QoS, 28, 57
QoS templates
predefined, 28
Quality of Service, 28, 57
Quality of Service template editor, 58

R
random sleep, 101
raw data totals, 124
Raw Data Totals tab, 87
reading error messages, 153
Readme file
before calling Technical Support, 164
description of known limitations, 157
real time, 79
Real-time Transport Protocol, 40
receive timeout, 39
records
count, 124
registered trademarks, xii
Registration Center, 15
registration number, 15
changing, 54

regression testing, 144


relative precision, 129
to decide how long to run, 142
reliable datagram delivery, 34
removing Chariot, 16
renumber all pairs, 66
replicate multicast group, 72
replicate pair(s), 72
replicating a test, 114
reporting ports, 54
reporting type, 78
Requested stop status, 84
requirements
Windows 95/98, 13
Windows NT, 12
resolution of clock timers, 129
Resolving names status, 83
response time, 119
response time calculation, 119
Response Time tab, 86
results
formatting, 112
of tests, 117
Retail product type, 4
retail version, 15
Retransmission Timeout Period parameter, 36
RSVP, 29
RTP, 40
configuration, 27
run for a fixed duration, 79
for stress testing, 144
Run menu item, 77
run options
default, 50
run until all scripts complete, 79
run until any script completes, 78
Running status, 83
running tests, 47, 81
from the command line, 111
running your first test, 9
RUNTST, 111
for stress and regression testing, 144
for stress testing, 144
RUNTST.LOG, 116
for service, 156

S
saving
comparison, 93
script, 103
scope, 64
SCR file, 109
script
default, 49
Script Editor
adding a new script, 102
edit
parameters, 101


edit menu, 104


editing variables, 101
exiting, 102
file menu, 102
insert menu, 106
inserting script commands, 106
opening a script, 102
overview, 99
saving, 103
scripts
modifying application parameters, 35
SCRIPTS directory, 15
searching
in the Error Log Viewer, 98
secondary messages, 154
secure mode, 136
select all pairs, 66
Select printer, 94
send rate based data, 38
send_buffer_size, 40
send_data_rate, 38
service quality
default, 49
SERVQUAL.DAT, 70
file description, 108
for mode names, 24
for QoS templates, 28
Set run options menu item, 77
setup program, 14
short connections, 143
shortcut keys
Script Editor, 107
shortcuts
keyboard, 60
show details button, 153
Show endpoint configuration menu item, 72
Show error message menu item, 72
Show timing records menu item, 72
SLEEP, 101
sockets port number, 30
software
for Windows 95/98 console, 13
for Windows NT console, 12
sort, 74
Sort menu item, 72
spreadsheet
export to, 62, 93
SPX, 26
choosing a port number, 134
configuring Windows NT console, 27
SPXDIR.DAT
for IPX address aliases, 137
from IPX/SPX directory, 56
status bar, 62
stdout, 111
Stop menu item, 77
stop run on initialization failure, 80
stopping
when using SPX on Windows NT, 84
stopping a test, 82

Stopping status, 84
streaming
viewing streaming results, 122
stress testing, 144
summary report, 62

T
tabs in the Test window, 61
TCP/IP
choosing a port number, 134
configuration, 27
HOSTS file
for Windows NT, 27
IP address, 27
network address, 27
performance tuning, 133
TCP receive window, 133
testing a connection, 30
Technical support, 164
test automation, 150
test files
default directory, 50
test results
viewing, 117
Test Setup tab, 85
test totals, 118
test window keys help, 91
Test window menu items, 61
Testing through Firewalls, 138
TESTS directory
for Chariot tests, 15
threads, 155
throughput, 119
throughput calculation, 119
Throughput tab, 85
throughput unit default, 52
Throughput units menu item, 72
Time To Live, 39
timing records
printing selected pairs, 64
too many, 142, 148
timing records per pair, 142
tips for testing, 133
toolbar, 61
tools menu, 55
TP name
for endpoints, 26
trademarks, xii
transaction count, 124
transaction rate, 119
transaction rate calculation, 119
Transaction Rate tab, 85
transactions_per_record setting, 144
traps, 155
troubleshooting, 153
TST file, 109
TTL, 39
TXT file, 109


U
UDP
configuration, 27
understanding IP Multicast, 42
understanding multimedia support, 38
understanding reliable datagram delivery, 34
understanding the run status, 83
undo button, 49
unicast
definition of, 38
uniform distribution, 101
uninstall, 16
unmark selected items, 66, 94
unmarking pairs and groups, 66, 94
untitled Test window, 62
upper distribution, 101
USERxx.CMP files, 147

V
validating received data, 81
view error log
location, 98
view menu (Error Log Viewer), 97
viewing the results, 117

W
warnings

changing, 52
Where to read script files, 50
Where to write console error logs, 50
WINAPING, 25
window
Comparison, 92
Test, 61
window menu, 84
Window Size for datagrams, 36
Windows 95
no QoS support, 13
Windows 95/98
console installation, 14
console requirements, 13
Windows 95/98 endpoint
IP network address, 28
Windows 98
QoS support, 13
Windows NT
console installation, 14
console requirements, 12
QoS support in 5.0, 13
Windows NT console IP address, 27
Windows NT endpoint
IP network address, 27
WK3 file, 109
using FMTTST, 112
working with datagrams, 33
working with multimedia support, 33
