(June-99)
Contents
Preface ............................................................................................................................... ix
Ganymede Software Inc. End-User License Agreement, Console Software........................... ix
Introductory Material............................................................................................................. xi
Trademarks..........................................................................................................................xii
Welcome
1
What You Need to Know ....................................................................................................... 1
What's New in Chariot 3.1 ..................................................................................................... 1
New Functions in Chariot 3.1.................................................................................... 1
Usability Improvements ............................................................................................ 2
Version 3.1 Compatibility Considerations .................................................................. 2
About This Manual ................................................................................................................ 3
Conventions in This Manual................................................................................................... 4
Chariot Product Types........................................................................................................... 4
Use Our Extensive Online Help ............................................................................................. 4
Introducing Chariot
Installing Chariot .............................................................................................................. 11
Configuring Chariot in Your Network ................................................................................ 19
Working with Datagrams and Multicast Support ............................................................... 33
Operating the Console ....................................................................................................... 47
Tips for Testing ............................................................................................................... 133
Troubleshooting ............................................................................................................... 153
Ganymede Software Customer Care ................................................................................. 163
Index ............................................................................................................................... 165
Preface
Ganymede Software Inc. End-User License Agreement, Console
Software
CHARIOT CONSOLE SOFTWARE
Grant of License. Ganymede is licensing (not selling) the enclosed software (the "Software") to you in object
code form. Subject to the terms of this Agreement, you have the non-exclusive, non-transferable right to do the
following: (a) install the Software on a single computer (the "Designated CPU"); (b) to use and operate the
Software on the Designated CPU in connection with the platform set forth with the accompanying Software, to
run up to the number of simultaneous tests specified on the accompanying invoice for the license fee; (c) make
ONE copy of the Software for backup and archival purposes, provided that you also keep the original copy of
the Software in your possession; and (d) use the documentation contained in this package (the
"Documentation") during the term of this Agreement in support of your use of the Software.
Protection of Software. You agree to take all reasonable steps to protect the Software and Documentation
from unauthorized copying or use. The Software and Documentation represent and contain certain copyrighted
materials, as well as trade secrets and other valuable proprietary information of Ganymede and/or its licensors.
The source code and embodied proprietary information and trade secrets are not licensed to you, and any
modification, addition or deletion is strictly prohibited. You agree not to disassemble, decompile, or otherwise
reverse engineer the Software, or examine network flows or related flow methodology employed by the
Software, in order to discover the source code or other proprietary information and trade secrets contained in
the Software.
Restrictions. You agree that you may not: (a) use, copy, merge, or transfer copies of the Software or the
Documentation, except as specifically authorized in this Agreement; (b) use the backup or archival copy of the
Software (or permit any third party to use such copy) for any purpose other than to replace the original copy in
the event that it is destroyed or becomes defective; or (c) rent, lease, sublicense, distribute, transfer, modify, or
timeshare the Software, the Documentation or any of your rights under this Agreement, except as expressly
authorized in this Agreement.
Ownership. Ganymede and its licensors own all rights of authorship, including copyright, in and to the
Software and the Documentation. Ganymede continues to own the copy of the Software contained in this
package and all other copies that you are authorized by this Agreement to make (all such authorized copies
being expressly included in the term "Software", as used in this Agreement). You shall own only the magnetic
or other physical media on which the Software is recorded. Ganymede and/or its licensors reserve all rights
not expressly granted to you in this Agreement.
Term. If the Software and Documentation are being licensed to you for evaluation purposes, this Agreement
will be effective for fifteen (15) days, beginning on the date you install the Software on the Designated CPU, or
such earlier date as you destroy or return the Software and Documentation. Upon your payment in full of the
applicable license fee, this Agreement will be effective until terminated. You may terminate this Agreement by
destroying or returning the Software and Documentation and all copies thereof. This Agreement will also
terminate if you fail to comply with any term or condition of this Agreement. You agree, upon any such
termination by Ganymede, to destroy all copies of the Software and Documentation or return them, postage
prepaid, to Ganymede at the address set forth below. Except as provided in the following section, returning the
Software to Ganymede following the opening and/or use of the Software will not entitle you to a refund.
LIMITED WARRANTY
Compatibility. The Software is only compatible with certain operating systems. THE SOFTWARE IS NOT
WARRANTED FOR NON-COMPATIBLE SYSTEMS. Please consult the specifications contained in the
accompanying user Documentation for more information concerning compatibility.
Magnetic Media and Documentation. Ganymede warrants that if the magnetic media or Documentation are
in a damaged or physically defective condition at the time that the license is purchased and if they are returned
to Ganymede (postage prepaid) within 90 days of purchase, Ganymede will provide you with replacements at
no charge.
Software. Ganymede warrants that if the Software fails to conform substantially to the specifications set forth
in the Documentation and if the non-conformity is reported in writing by you to Ganymede within 90 days
from the date that the license is purchased, Ganymede will, at Ganymede's option, either remedy the non-conformity or offer to refund the license fee to you upon return of all copies of the Software and Documentation
to Ganymede. In the event of a refund, this Agreement shall terminate.
GANYMEDE MAKES NO OTHER REPRESENTATIONS OR WARRANTIES, EITHER EXPRESS
OR IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, GANYMEDE MAKES NO
REPRESENTATIONS OR WARRANTIES OF MERCHANTABILITY, TITLE OR FITNESS FOR
ANY PARTICULAR PURPOSE. IN NO EVENT SHALL GANYMEDE BE RESPONSIBLE FOR ANY
INCIDENTAL OR CONSEQUENTIAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOSS
OF DATA OR LOST PROFITS AS A RESULT OF YOUR USE OF, OR INABILITY TO USE, THE
SOFTWARE, EVEN IF GANYMEDE IS MADE AWARE OF THE POSSIBILITY OF SUCH
DAMAGES.
This limited warranty gives you specific legal rights. Some states do not allow limitations on how long an
implied warranty lasts, or on incidental or consequential damages, so the above limitations may not apply to
you. You may also have other legal rights, which vary from state to state.
Limitation of Remedies. Ganymede's entire liability and your exclusive remedy under this Agreement are
limited to correction of defects, replacement of the magnetic media containing the Software or refund of the
license fee, at Ganymede's option.
Responsibilities of Licensee. As a licensee of the Software, you are solely responsible for the proper
installation and operation of the Software in accordance with the instructions and specifications set forth in the
Documentation. Ganymede shall have no responsibility or liability to you, under the limited warranty or
otherwise, for improper installation or operation of the Software. Any output or execution errors resulting
from improper installation or operation of the Software shall not be deemed "defects" for purposes of the
limited warranty set forth above.
GENERAL PROVISIONS
Governing Law. This Agreement shall be governed by and construed in accordance with the laws of the State
of North Carolina, except as to copyright and trademark matters governed by United States Laws and
International Treaties. This Agreement shall inure to the benefit of Ganymede, its successors and assigns.
This Agreement is deemed entered into in Wake County, North Carolina.
Entire Agreement. This Agreement sets forth the entire understanding between you and Ganymede with
respect to the subject matter hereof. This Agreement may be amended only in a writing signed by Ganymede
and by you. Nothing contained in any purchase order, acknowledgment, invoice or other form submitted by
you in connection with the license of the Software shall amend or affect the provisions of this Agreement. NO
VENDOR, DISTRIBUTOR, DEALER, RETAILER, SALES PERSON OR OTHER PERSON IS
AUTHORIZED TO MODIFY THIS AGREEMENT OR TO MAKE ANY WARRANTY,
REPRESENTATION OR PROMISE WHICH IS DIFFERENT THAN, OR IN ADDITION TO, THE
REPRESENTATIONS OR PROMISES OF THIS AGREEMENT.
Export. Export of the Software and the Documentation outside of the United States is subject to the Export
Administration Regulations of the Bureau of Export Affairs, United States Department of Commerce. In the
event you desire to export the Software outside the United States, the Software shall at all times remain subject
to the terms of this Agreement, and you agree to be responsible, at your own expense, for complying with all
applicable regulations governing such export. Ganymede makes no warranty relating to the exportability of the
Software to any particular country.
Waiver. No waiver of any right under this Agreement shall be effective unless in writing, signed by a duly
authorized representative of Ganymede. Failure to insist upon strict compliance with this Agreement shall not
deem to be a waiver of any future right arising out of this Agreement.
Severability. If any provision of this Agreement is held by a court of competent jurisdiction to be invalid or
unenforceable, such provision shall be fully severable, and this Agreement shall be construed and enforced as if
the illegal, invalid or unenforceable provision had never been a part of this Agreement.
If you have any questions concerning this Agreement, please contact:
Ganymede Software Inc.
1100 Perimeter Park Drive, Suite 104
Morrisville, North Carolina 27560
U.S.A.
Telephone: 919-469-0997
Facsimile: 919-469-5553
Introductory Material
All examples with names, company names, or companies that appear in this manual are imaginary and do not
refer to, or portray, in name or substance, any actual names, companies, entities, or institutions. Any
resemblance to any real person, company, entity, or institution is purely coincidental.
Ganymede Software may have patents and/or pending patent applications covering subject matter in this
manual. The furnishing of this document does not give you any license to these patents.
Printed in the United States of America.
Trademarks
CHARIOT is a federally registered trademark of Ganymede Software Inc., registration number 1,995,601.
GANYMEDE and GANYMEDE SOFTWARE are federally registered trademarks of Ganymede Software
Inc., registration number 2,053,321. Pegasus is a trademark of Ganymede Software Inc.
IBM, IBM PC, and OS/2 are registered trademarks of International Business Machines Corporation. Intel is a
registered trademark and 80386, 386, 486, and Pentium are trademarks of Intel Corporation. Microsoft and
Windows NT are registered trademarks, and Windows is a trademark of Microsoft Corporation.
Other product names mentioned in this manual may be trademarks or registered trademarks of their respective
companies and are the sole property of their respective manufacturers.
Welcome
Welcome to Chariot, by Ganymede Software Inc. Chariot is the standard for testing and monitoring the
performance of client/server networks. Chariot can help you:
Chariot version 3.1 is designed to be used with version 3.3 of the Performance Endpoints.
You are familiar with basic concepts and terminology for the operating system you're using at the console
(Microsoft's Windows 95, 98, or NT).
You are familiar with the network protocols supported by the console, and with other network protocols
you may run between endpoints in a test. You need to understand how to set up programs that use those
application programming interfaces (APIs).
Usability Improvements
Firewall Options tab
To make testing through firewalls easier, we've added several options that help you use Chariot to test
through a firewall. These options are located on the Firewall Options tab of the User Settings notebook
(formerly the Reporting Ports tab). See Changing Your Firewall Options in the Operating the Console
chapter on page 54.
New Legends on Some Graphs
We have added new legends to some graphs to improve their readability and usability.
Throughput Units in Gbps
You can now show throughput units in Gbps. Select this option on the Throughput Units tab of the User
Settings notebook. See Changing Your Throughput Units in the Operating the Console chapter on page
52.
In previous versions of Chariot, the following scripts had invalid transaction loop counts:
Tips for Testing on page 133 describes topics to help improve the test results you get from Chariot. For
example, it discusses how long a test should run, using short and long connections, and things to avoid.
Troubleshooting on page 153 describes the Chariot error logs, common problems, and how to resolve
problems with communication stacks.
Service and Support on page 164 describes how to work with Ganymede Software if you encounter problems.
The Courier font indicates a command to enter, or the output of a computer program.
underlined text
Underlined text indicates a hyperlink to another book or chapter, or to an Internet URL.
The Home button, at the top of each chapter, takes you to our online library. From there you can select
any of our online books.
The Index button, at the top of each chapter, takes you to the index for the current online book. You can
scroll through the alphabetized index, or you can use your browser's text search feature (click Edit/Find in
version 4.x or later of Microsoft's Internet Explorer or Netscape's Navigator/Communicator) to move more
quickly through the index.
Each book has a table of contents at the left. Click on any chapter name to read it.
Each chapter has a table of contents at the top. Click on a section name to jump to it. Use your browser's
scroll bars to move through the text.
The Top buttons, at the right of each section, take you to the start of the current chapter.
The Chapter buttons, at the bottom of each chapter, take you to the next or previous chapter.
Introducing Chariot
Chariot is designed to measure the performance between pairs of networked computers. Different kinds of
distributed applications can be emulated, and the results of running the emulated applications can be captured
and analyzed.
You operate Chariot from its console, a program with a graphical user interface that lets you create and run
tests. To create a test, you determine which computers to use and the type of data traffic you want to run
between them. Chariot refers to each of these computers as a Performance Endpoint, or simply endpoint. An
endpoint pair comprises the network addresses of the two computers, the network protocol to use between
them, and the type of application to emulate. Tests can include just one endpoint pair, or be more complex,
running hundreds of endpoint pairs using a mix of network protocols and application flows.
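As a mental model, the information that defines an endpoint pair can be sketched as a small record. Everything in this sketch, including the class, field names, and script name, is invented for illustration; it is not Chariot's internal format.

```python
# Purely illustrative (not Chariot's internal data structures): the pieces
# of information that define one endpoint pair in a test.
from dataclasses import dataclass

@dataclass
class EndpointPair:
    endpoint1_addr: str   # network address of Endpoint 1
    endpoint2_addr: str   # network address of Endpoint 2
    protocol: str         # network protocol between them, e.g. "TCP"
    script: str           # application script the pair emulates

# A test holds one or more such pairs.
test_pairs = [EndpointPair("10.0.0.1", "10.0.0.2", "TCP", "file_transfer")]
```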
For each endpoint pair, you select an application script which emulates the application you're considering.
Endpoints use the script to create the same data flows that an application would, without the application having
to be installed. Chariot comes with a set of predefined application scripts, providing standard performance
benchmarks and emulating common end-user applications.
The operation of Chariot is centered at its console. For example, all test files and scripts are stored at the
console and distributed to the endpoints. The endpoint software is generally installed once and rarely touched.
A Brief Walkthrough
Here is an example of how you use Chariot.
Create the first endpoint pair, by specifying the network addresses of Endpoints 1 and 2, and the protocol
to use between them.
This can be done by explicitly typing in their network addresses, or by selecting from a list of network
addresses saved from previous tests.
The key flows in the above picture are numbered and described below.
1. A test is created at the console, and the user presses the Run button. The console sends the setup
information to Endpoint 1, including the application script, the address of Endpoint 2, the protocol to use
when connecting to Endpoint 2, the service quality to use, how long to run the test, and how to report
results.
2. Endpoint 1 keeps its half of the application script, and forwards the other half to Endpoint 2. When
Endpoint 2 has acknowledged it is ready, Endpoint 1 replies to the console. When all endpoint pairs are
ready (in this example, there's just one pair), the console directs them all to start.
3. The two endpoints execute their application script, with Endpoint 1 collecting the timing records.
4. Endpoint 1 returns timing records to the console, which displays the results.
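The four flows above can be sketched as ordinary function calls. Every class and method name in this sketch is invented for illustration; it is not Chariot's real API or message format.

```python
# A schematic of the four flows, modeled as plain function calls rather
# than real network messages. All names here are invented stand-ins.
class Endpoint:
    def __init__(self, name):
        self.name = name
        self.script_half = None
        self.partner = None

    def setup(self, script_half, partner=None):
        self.script_half = script_half
        self.partner = partner
        return "ready"                 # acknowledgment back toward the console

    def execute(self):
        # Endpoint 1 collects the timing records while the pair runs.
        return [{"pair": (self.name, self.partner.name), "elapsed_s": 0.0}]

def run_test(script_lines):
    e1, e2 = Endpoint("Endpoint 1"), Endpoint("Endpoint 2")
    # 1. The console sends the setup information to Endpoint 1,
    #    including the application script and the address of Endpoint 2.
    half1, half2 = script_lines[::2], script_lines[1::2]
    # 2. Endpoint 1 keeps its half of the script and forwards the other
    #    half to Endpoint 2; once Endpoint 2 acknowledges, the pair starts.
    e2.setup(half2)
    assert e1.setup(half1, partner=e2) == "ready"
    # 3. The endpoints execute their script halves.
    records = e1.execute()
    # 4. Endpoint 1 returns timing records for the console to display.
    return records

records = run_test(["SEND 100", "RECV 100"])
```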
In the Main window, press the New button to get to a Chariot Test window. The Test window lets you add
pairs of endpoints to a test.
Click on the Add pair icon in the toolbar (two squares connected by a parallel line; it should be the first
icon that is not grayed). The Add an Endpoint Pair dialog appears.
Press the OK button to close the Add an Endpoint Pair dialog. You have set up a test with one pair.
Run the test, by clicking on the Run icon in the toolbar (a stick figure and a green sign).
The test runs using your computer as both the console and the endpoints. The IP address of 127.0.0.1
eliminates the need to know any other computer's TCP/IP addresses. The throughput and response time
numbers you get are constrained by the speed of your CPU, since the whole test runs completely inside
your computer.
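The idea behind a loopback test can be sketched with generic sockets. This is not Chariot code, just an illustration of why 127.0.0.1 is the only address needed and why CPU speed bounds the results when both roles run in one process.

```python
# A generic socket sketch (not Chariot code) of a loopback test: sender and
# receiver both run in this one process, so 127.0.0.1 is the only address
# needed and the result is bounded by local CPU speed, not by a network.
import socket
import threading
import time

TOTAL = 1_000_000  # bytes to transfer

server = socket.socket()
server.bind(("127.0.0.1", 0))   # any free port on the loopback address
server.listen(1)
port = server.getsockname()[1]

def receiver():
    conn, _ = server.accept()
    got = 0
    while got < TOTAL:
        data = conn.recv(65536)
        if not data:
            break
        got += len(data)
    conn.close()

t = threading.Thread(target=receiver)
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
start = time.time()
sent = 0
chunk = b"x" * 65536
while sent < TOTAL:
    sent += client.send(chunk[: TOTAL - sent])
client.close()
t.join()
server.close()
elapsed = time.time() - start
print(f"moved {sent} bytes through loopback in {elapsed:.3f} seconds")
```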
Installing Chariot
Before you begin working with Chariot:
1. Check the Chariot package to make sure you have all the contents. For details on the contents of the
package, see "The Chariot Package" on page 11.
2. Make sure you have the required hardware and software to run Chariot.
3. Decide which computers you'll use as the console and the endpoints.
4. Make sure you have a Web browser installed on your chosen console computer.
5. Ensure the console and the endpoint programs are configured correctly for the network protocol(s)
they are using.
A Chariot CD-ROM
A printed Messages and Application Scripts manual containing information on the application scripts
and a listing of all messages
Make sure you have everything listed here. Please contact us if anything is missing; see the Ganymede
Software Customer Care chapter on page 163 for information about contacting us.
See the Configuring Chariot in Your Network chapter on page 19 for more information on ways to find
the network addresses of the computers and check out the connections between them. Complete the steps
in that chapter before attempting to run Chariot.
When you install the console, you can choose to install the endpoint on the same computer. This lets you
get started with Chariot (running tests within a single computer) before you run it across a real network.
An x86 computer capable of running Microsoft's Windows NT well. This implies a CPU such as an
Intel 80386, 80486, a member of the Pentium family, or equivalent. A Pentium or better is
recommended.
A CD-ROM drive on the computer on which you are installing the Chariot Console.
A Web browser. Because Chariot online help is now in HTML format, you need a Web browser to
view the help. We recommend version 4.x (or later) of either Netscape Navigator or Microsoft
Internet Explorer.
For best results, we recommend using a color palette of at least 256 colors.
We also recommend that you get up-to-date with the latest Windows NT service levels. See the
Troubleshooting chapter for more information on how to get the latest patches and Service Packs.
You also need compatible network protocol software:
for APPC, one of the following
Three APPC stacks for Windows NT are supported by the Chariot console.
IBM Personal Communications AS/400 and 3270 for Windows NT version 4.11 or higher (its
short name is PCOMM for Windows NT): runs on x86 computers where its communications
APIs are installed.
IBM Communications Server for NT version 5.0 or higher: runs only on the server computer of
Communications Server's split-stack model.
Microsoft Windows NT SNA Server for x86: runs on either a client or the server computer of
SNA Server's split-stack model. We recommend version 4.0 of SNA Server with its latest
service packs. At a minimum, you need Microsoft SNA Server version 2.11 with Service Pack 2.
Versions prior to 2.11 Service Pack 2 have bugs which you'll encounter quickly.
for IPX/SPX
SPX software is provided as part of the network support in the Windows NT operating system.
Microsoft improved their SPX support with SPX II. SPX II is also present on Novell NetWare 4.x.
SPX II allows a window size greater than 1 and buffer sizes up to the size the underlying transport
supports.
The SPX protocol supplied by Microsoft in Windows NT 4.0 is subject to slowdowns when running to
itself, that is, with loopback.
for RTP, TCP, or UDP
TCP/IP software is provided as part of the network support in the Windows NT operating system.
Microsoft's Service Pack 3 for Windows NT 4.0 fixes several TCP/IP bugs and is required for IP
Multicast; it is strongly recommended for users of Windows NT 4.0. We recommend always using
the latest service pack. See "Microsoft's WinSock 2 Software" on page 30 in the Configuring Chariot
in Your Network chapter for more information.
Quality of Service (QoS) support for RTP, TCP, and UDP is part of Microsoft Windows 98 and
Windows 2000 (now in beta). At the time of this writing, we are testing with Windows 2000 beta 3.
Its QoS support is much improved over Windows 98; it supports DiffServ as well as RSVP, and has a
number of bug fixes not in Windows 98.
IP Multicast and QoS are not required to operate the Chariot console. However, if you plan to run IP
Multicast or QoS tests using an endpoint installed on the same computer as the console, you must be
running an appropriate TCP/IP stack.
An x86 computer capable of running Microsoft's Windows 95 or 98 well. This implies a CPU such
as an Intel 80386, 80486, a member of the Pentium family, or equivalent. A Pentium or better is
recommended.
A CD-ROM drive on the computer on which you are installing the Chariot Console.
A Web browser. Because Chariot online help is now in HTML format, you need a Web browser to
view the help. We recommend version 4.x (or later) of either Netscape Navigator or Microsoft
Internet Explorer.
For best results, we recommend using a color palette of at least 256 colors.
We also recommend that you get up-to-date with the latest Windows 95 or 98 service levels. See the
Troubleshooting chapter for more information on how to get the latest patches and Service Packs.
You also need compatible network protocol software:
for APPC
IBM Personal Communications AS/400 and 3270 for Windows 95 version 4.11 or higher (its short
name is PCOMM for Windows 95) is supported by the Chariot console. It runs on x86 computers
where its communications APIs are installed.
The first dialog, after Setup has loaded itself, asks you to enter your name and your company name.
The Select Destination Directory dialog lets you select where to install the console. We recommend
installing it on a local hard disk of the computer you're using. If you install on a LAN drive, the
additional network traffic may influence your test results. The default directory is \GANYMEDE\CHARIOT,
on your boot drive.
The console installation next goes through the following steps:
After the installation is complete, you can choose to install the endpoint. See the Performance Endpoints
manual for more information on installing the endpoints.
The Chariot installation checks to see that the Microsoft C runtime files already on your computer are at a
service level at least as recent as those installed with Chariot. If these files are down-level, Chariot
replaces them with newer copies.
After installing the files, the installation program creates a Chariot folder. Inside of that folder are three
icons: Chariot Console, Chariot Help, and Readme. The console can now be started, by clicking on the
Chariot Console icon.
For retail versions, you must register the product with the Ganymede Software Registration Center to
receive an authorization key. Contact information for the Ganymede Software Registration Center
can be found on the inside of your Chariot CD-ROM case. You may use the product in evaluation
mode for 15 days while you are requesting your authorization key. (See Changing Your Registration
Number on page 54 in the Operating the Console chapter for information on the console dialog that
lets you make changes after installation.)
Upon contacting the Ganymede Software Registration Center, you will be asked for a registration
number and a license code. The registration number can be found on the Registration Card you
received upon purchase. The license code is provided on the initial screen displayed when starting
Chariot. After providing this information to the Registration Center, you will receive an
authorization key that will enable the retail version of Chariot.
For evaluation versions, leave the Registration number field empty and press the OK button. The
evaluation period for Chariot is 15 days. Each time you start Chariot, the dialog displays the number
of days remaining in the evaluation period. After the evaluation period is over, you will not be able to
start Chariot without entering a registration number and authorization key. If you later purchase
Chariot, obtain a new authorization key, as described above. You do not have to reinstall Chariot
after you purchase the product.
Actions During Windows Installation
Here's what we do during the installation steps. Let's say you install Chariot into the directory named
GANYMEDE\CHARIOT. Chariot installation creates directories with the following structure:
GANYMEDE\CHARIOT
GANYMEDE\CHARIOT\SCRIPTS
GANYMEDE\CHARIOT\TESTS
GANYMEDE\CHARIOT\HELP
1. From the Windows Start menu, select the Settings submenu, and the Control Panel menu item.
2. From the Control Panel, double-click on the Add/Remove Programs icon. The Add/Remove
Programs Properties dialog is shown.
3. Highlight Ganymede Software Chariot and press the Add/Remove button. The Confirm File
Deletion message box is shown.
4. To remove Chariot, press the Yes button. The uninstallation program uninstalls Chariot.
5. When the uninstall is complete, press the OK button. The Add/Remove Programs Properties dialog
is shown. Press the OK button.
All system registry entries are removed when uninstalling. However, user registry entries are not
removed. For example, if you customize information on the user settings notebook and then uninstall
Chariot, your user settings are restored when you reinstall Chariot. To remove the registry entries, open
REGEDIT, highlight the key HKEY_CURRENT_USER\Software\Chariot, and press the DELETE key.
All registry entries will be removed.
APPC
IPX
RTP
SPX
TCP
UDP
You can use a different protocol to get from the console to Endpoint 1 than you use between a pair of
endpoints. Other protocols, to be supported in the future, may be shown when you see the list of available
network protocols in the console listboxes. You can create tests using these protocols, but the tests won't
run unless the network protocols are supported on the respective endpoints.
The Edit Console to Endpoint 1 dialog lets you choose to Use Endpoint 1 to Endpoint 2 values when
connecting from the console to Endpoint 1. (There's a similar checkbox in the User Settings notebook.)
If you're using a datagram protocol and you've checked this box, the console uses the corresponding
connection-oriented protocol for its connection to Endpoint 1. So, if you choose the IPX protocol for the
connection between Endpoint 1 and Endpoint 2, the default is to use SPX between the console and
Endpoint 1. Similarly, if you choose the UDP or RTP protocol for the connection between Endpoint 1 and
Endpoint 2, the default is to use TCP from the console to Endpoint 1.
Thus, here are the protocols supported between the console and Endpoint 1:
APPC
SPX
TCP
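The defaults described above amount to a small mapping from the pair's protocol to the console's protocol. The function below only illustrates that rule; it is not a Chariot API.

```python
# Illustrative only (not a Chariot API): the console's default choice of
# protocol for its own connection to Endpoint 1, given the protocol chosen
# between the endpoint pair.
DATAGRAM_TO_CONNECTION = {
    "IPX": "SPX",   # IPX between endpoints -> SPX console connection
    "UDP": "TCP",   # UDP between endpoints -> TCP console connection
    "RTP": "TCP",   # RTP between endpoints -> TCP console connection
}

def console_to_endpoint1_protocol(pair_protocol: str) -> str:
    # Connection-oriented protocols (APPC, SPX, TCP) are used unchanged.
    return DATAGRAM_TO_CONNECTION.get(pair_protocol, pair_protocol)

print(console_to_endpoint1_protocol("IPX"))  # SPX
```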
Chariot's support of datagram protocols, such as IPX and UDP, can emulate the behavior of datagram
applications that must provide reliable delivery of data. RTP can only be used to emulate multimedia
applications which send a stream of data. For datagram protocols, reliable delivery between endpoints is
not provided by the protocol, but by the endpoint itself. See the Working with Datagrams and Multicast
Support chapter on page 33 for more information.
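As a toy illustration of that point (generic sockets, not Chariot's actual mechanism), an endpoint can layer reliable delivery on a datagram protocol by retransmitting each datagram until an acknowledgment arrives:

```python
# A toy illustration (generic sockets, not Chariot's actual mechanism) of
# reliability implemented by the endpoint rather than by the protocol:
# each datagram is retransmitted until an acknowledgment arrives.
import socket
import threading

def send_reliably(sock, dest, payload, retries=5, timeout=0.5):
    """Send one datagram and wait for an ACK, retransmitting on timeout."""
    sock.settimeout(timeout)
    for _ in range(retries):
        sock.sendto(payload, dest)
        try:
            ack, _ = sock.recvfrom(16)
            if ack == b"ACK":
                return True
        except socket.timeout:
            continue                 # datagram or ACK lost: retransmit
    return False

# Loopback demonstration: a peer that acknowledges whatever it receives.
peer = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
peer.bind(("127.0.0.1", 0))
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def ack_once():
    data, addr = peer.recvfrom(65536)
    peer.sendto(b"ACK", addr)

t = threading.Thread(target=ack_once)
t.start()
delivered = send_reliably(sender, peer.getsockname(), b"hello")
t.join()
sender.close()
peer.close()
print("delivered:", delivered)
```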
Chariot, as a set of network applications, expects your network hardware and software to be set up and
running correctly. This chapter guides you through verifying this at the console. Be sure to read the
corresponding chapter for each endpoint you're using in your tests. Here are the key tasks:
1. Determine the network addresses of the computers to be used as the console and the endpoints in
Chariot tests.
Let's look at each of the protocols to see how to accomplish these tasks:
APPC Configuration
This section provides selected information about configuring APPC. If you are new to configuring APPC,
start with the guidance provided by the APPC network software you're using. Chariot consoles support:
APPC on Windows NT with IBM Communications Server or Personal Communications (the second
product is also known as PCOMM)
IBM has created a thorough (but aging) guide to setting up APPC across a variety of its platforms. This
guide is called the MultiPlatform Configuration Guide (you may hear it called MPCONFIG), and is
available for download from the Internet and from CompuServe. Here are the file names to look for:
MPCONT.ZIP
MPCONP.ZIP
MPCONB.ZIP
MPCONF.ZIP
Start the SNA Node Operations program by either running PCSNOPS.EXE from a command prompt
or by clicking on the icon.
The first panel displayed should be the Node panel, which displays a value entitled FQCP Name. If
this is not visible, select View...Select Resource Attributes and select it for viewing. A default fully-qualified LU name is automatically configured, and it has the exact same name as the FQCP Name
shown in this panel.
A fully-qualified LU name is the easiest network address to use with Chariot. Although you can define
multiple LUs at one computer, the default LU name (determined above) is the one on which the endpoint
listens for a connection from the console.
(Figure: SNA Server with links to Windows NT clients over IP, IPX, and Named Pipes connections.)
On each of the Client computers, you install Microsoft's Windows NT Client for SNA Server. The
endpoint for Windows NT programs can then use the APPC protocol on these computers, in addition to
other network protocol support that may be installed, such as TCP or SPX. The APPC calls at the
endpoints are encapsulated and forwarded to the SNA Server, which executes the calls and returns the
results to the endpoint program at the Client.
Chariot can run APPC tests with these different combinations of connections:
1.
Client to Client
2.
3.
4.
5.
Microsoft's SNA Server is more fragile than we would have expected. The last two of these connections
are much more robust than the first three. Don't expect to run a Chariot test from Client to Client with
more than about 3 pairs, or for more than about 30 seconds. (Client connections over TCP/IP are
slightly more robust than those over IPX or Named Pipes.) The test can fail in a variety of ways,
including gradual slowdowns, abends, and lockups. Errors may be logged at the SNA Server. The SNA
Server and/or Clients may need to be restarted, or the computer may need to be rebooted.
All the LUs reside at the SNA Server. The person administering the SNA Server creates a list of LUs
there. An LU list consists of one or more LU definitions, each with a different LU name. Generally, any
of the Clients can use any of the LUs in its list. This means that, in a normal setup, you're not sure what
LU a Client's application program is using at any time. This makes it difficult to execute reasonable
Chariot performance tests, where you've provided LU names for the Endpoint 1 and Endpoint 2 addresses.
For example, you might want to create a test that runs from MVS to Client C. When the endpoint
program is started (that is, when Chariot's Windows NT endpoint service starts) on Client C, it begins
making APPC calls to listen for incoming tests. When it makes its first APPC call, its SNA Server
assigns it an LU from the list. Since endpoint programs may be running on multiple Clients, you'd like to
know precisely (and repeatably) which LU is being used by Client C.
Indeed, with this method, Chariot might be connected to one computer during the first part of the
initialization phase but be connected by SNA Server to another computer for the rest of initialization,
leading to serious errors returned by the endpoints.
There appears to be only one method to associate a specific LU from the list of available LUs to a specific
Client computer, letting you create correct and repeatable tests. This involves putting Define TP entries
into the Windows NT registry on each endpoint computer defining the specific LU name that the endpoint
wants to be associated with. This is how Chariot operates.
Chariot makes Define TP entries during installation, when you are prompted for the APPC LU alias.
They can be made at a later time, using the SETALIAS Chariot command-line program.
1.
Set up which modes will be automatically defined on these connections: select any remote partner and
view its properties, select the Partners... button, select the Modes... button, select which mode
names you would like to use, and ensure the Enable Automatic Partnering box is checked (marked
with an X) for each.
2.
Set up which partners you would like to automatically partner: select the remote LUs, view their
properties, and ensure the Enable Automatic Partnering box is checked for each.
3.
Verify that the Enable Automatic Partnering box is checked for each local LU.
To verify that the partnering is correct, select the Partners... button when viewing remote LU properties;
a full list of all LUs that are partnered is shown.
When local LUs are partnered with remote LUs, local LUs running on SNA Server Client computers can
initiate a connection to a remote computer. However, since SNA Server is not an APPN
implementation, if a remote computer wishes to initiate a connection to one of the LUs defined in SNA
Server, it needs to have configuration information in its own APPC configuration program, defining a
connection to the SNA Server computer for each LU to which it will connect.
This should contain the installed default value CHARIOT; it can be changed to another unique LU
alias of your choosing. This is the LU alias the console program uses. The console's LU alias must be
different from the LU alias that the endpoint program uses (even though they may be running on the
same computer).
Mode Name
Description
#INTER
interactive traffic
#INTERSC
interactive traffic over a secure connection
#BATCH
batch (bulk data) traffic
#BATCHSC
batch traffic over a secure connection
For many tests, these modes are sufficient. However, if you are trying to emulate a particular APPC
application, you should select the same mode name that it uses.
These pre-defined modes are defined with session limits of 8. This means that you can have only 8
sessions at a time, between a pair of computers, using the same mode name. If you're attempting to run
more than 8 sessions using the same mode between a pair of computers, we recommend creating a new
mode on both computers, with a session limit larger than 8.
Substitute the lu_name and mode_name with the actual names you determined in the steps above. (In
Chariot, the mode name is known as the service quality.) If APING works, it shows a table of timing
information. This endpoint should be ready for APPC testing with Chariot. Continue verifying
connections to the other endpoints in your test.
Although APING is packaged with most APPC software platforms, only a few automatically configure the
receiving side. If you run APING and get a return code of TP_NOT_AVAILABLE or
TP_NOT_RECOGNIZED, APING is probably not configured on the other computer. The good news is
that you did get a connection and that APPC is set up and running on both computers. Thus, even though
APING was not configured, you are able to connect and converse with the remote computer, so Chariot
should be able to use the remote endpoint for APPC testing.
If you get any other APPC return code, you probably have a configuration problem somewhere. You
should correct this before starting to run Chariot.
Be sure to test the APPC connections between the console and each of the Endpoint 1 computers you're
reaching via APPC. Also test the APPC connections between each Endpoint 1 and Endpoint 2 in pairs
you're testing with APPC. Don't test the connection from the console to Endpoint 2, since Chariot
consoles do not contact Endpoint 2 computers directly.
APPC TP Name
APPC applications use an LU name to decide which computer to connect to in a network. They use a TP
name to decide which application program to connect to within a computer.
Chariot uses the string GANYMEDE.CHARIOT.ENDPOINT as its TP name. This TP name is used
when communicating with endpoints via an APPC connection.
A list of recently cached IP addresses is shown, along with their MAC addresses (if they are LAN-attached).
It's tedious to enter IPX addresses when adding new endpoint pairs. When using the IPX or SPX protocol
in your tests, Chariot can maintain an easy-to-remember alias in the Edit Pair dialog. You can set up the
mapping once, and use the alias names ever after. From the Tools menu, select the Edit IPX/SPX
Entries menu item. The underlying file, named SPXDIR.DAT, is like the HOSTS file used in TCP/IP, or
the LU alias definitions offered with APPC.
For Windows 95, 98, and NT, Chariot makes WinSock version 1.1 Sockets-compatible calls when using
the IPX or SPX network protocol. For NetWare, Chariot makes calls to the TLI API when using IPX or
SPX.
If your IPX software support is configured correctly, your output looks like the following (this output is
taken from Windows NT 4.0):
NWLink IPX Routing and Source Routing Control Program v2.00
net 1: network number 00000002, frame type 802.2, device AMDPCN1
(0207011a3082)
The 8-digit network number is shown first; here, it's 00000002. The 12-digit node ID is shown in
parentheses at the end; here it's 0207011a3082, which is our Ethernet MAC address. Thus, the IPX
address to be used in tests is 00000002:0207011a3082.
If your TCP/IP stack is configured correctly, your output looks like the following (this output is taken from
Windows NT 4.0):
Windows NT IP Configuration
Ethernet adapter AMDPCN1:
IP Address. . . . . . . . . : 44.44.44.3
Subnet Mask . . . . . . . . : 255.255.255.0
Default Gateway . . . . . . : 44.44.44.254
The computer's local IP address is shown in the first row; here it's 44.44.44.3.
You can also find your IP address using the graphical user interface. Select the Control Panel folder, and
double-click on the Network icon. The installed network components are shown. Double-click on
TCP/IP Protocol in the list to get to the TCP/IP Configuration. Your IP address and subnet mask are
shown.
To determine a Windows NT computer's local host name, enter the following command:
HOSTNAME
Users of TCP/IP on other operating systems may be familiar with the NETSTAT command:
NETSTAT -N
This displays a line of text for each active connection. The local IP address is in the second column of
each row.
You can also find and change your IP address using the graphical user interface. From the Start icon,
select Settings. Select the Control Panel folder, and double-click on the Network icon. The installed
network components are displayed.
Double-click on TCP/IP to get to the TCP/IP Properties. Select the IP Address page to see or change
your local IP address. Select the DNS Configuration page to see or change your domain name. If the
DNS Configuration is empty, avoid using domain names as network addresses. Use numeric IP addresses
instead.
The initialization flows before the endpoints begin executing the test do not use QoS; QoS is not
supported between the console and Endpoint 1. The QoS support begins when Endpoint 1 and Endpoint 2
start executing the script.
If a QoS template is predefined at the endpoints, it is defined there as part of the software known as the
"Windows QoS Service Provider." The following templates are provided with Windows 98 and Windows
2000 beta 3; we recommend using one of them, if it makes sense for the application you are emulating.
QoS Template Name
Description
G711
common audio codec that uses 64 Kbps channels
G723.1
common audio codec that uses low-bit-rate (5.3 or 6.3 Kbps) channels
G729
common audio codec that uses 8 Kbps channels
H261CIF
common video codec used with image sizes of 352 x 288 pixels
H263CIF
common video codec used with communications channels that are multiples of 64 Kbps
and image sizes of 352 x 288 pixels
H261QCIF
common video codec used with image sizes of 176 x 144 pixels
H263QCIF
common video codec used with communication channels that are multiples of 64 Kbps
and image sizes of 176 x 144 pixels
If you need a QoS template different from these, you can create your own at the Chariot console; templates
are distributed to the endpoints as part of the initialization of a test run. See Working with Quality of
Service (QoS) Templates on page 57 in the Operating the Console chapter for information on creating
QoS templates at the Chariot console.
QoS templates are saved in CSV form in the SERVQUAL.DAT file at the console, not in the test file. (The
SERVQUAL.DAT file is located in the directory where the Chariot console is installed.) If you want to run
tests at a different Chariot console using a custom QoS template, take a copy of your SERVQUAL.DAT file
and extract the portions you need.
To use Quality of Service today, the RSVP protocol must be enabled on the router interface. For
information on how to enable the RSVP protocol, refer to the documentation for your router.
We have tested QoS with Cisco routers running IOS version 11.3.5. If you're testing QoS with Cisco
routers, we recommend that you use this version or later of the IOS software. If you do not have version
11.3.5, the following bug fixes are necessary for QoS testing.
CSCdk28283: non-SBM aware routers running only RSVP should not process SBM messages.
CSCdk27983: RSVP should not drop messages with 10xxxxxx class objects.
CSCdk27475: RSVP should not drop policy object in RESV message.
CSCdk29610: RSVP should not reject RTEAR message without flowspec.
CSCdk38002: Router crashes while forwarding RSVP Path-Err message.
CSCdk38005: Router does not forward RESV messages with Guaranteed service flowspec. This fixes
the main problem we encountered, which caused tests with Guaranteed service types to not work at
all.
We have found in our testing that if a router in the path between two endpoints rejects the QoS request
after the test has started running, the endpoint is not aware of this and the test will continue to run. The
error messages are returned asynchronously from the router and the traffic will be treated as Best Effort.
This situation can occur if a router in the path does not have the necessary bandwidth to fulfill the request.
Replace the x's with the IP address of the target computer, that is, the computer you're trying to reach.
On Windows NT, if Ping returns a message that says Reply from xx.xx.xx.xx:..., the Ping worked. If it
says Request timed out., the Ping failed; you may have a configuration problem or a network problem,
or the target computer may not even be powered on.
For more details about the Ping command, enter:
PING -?
If you're unable to reach the target computer using Ping, the TRACERT command may help you
determine how far packets can get through the network. TRACERT tries to find whether each hop in the
IP network can be reached, on the way to the target computer. Be aware that TRACERT's results aren't
necessarily repeatable, since a different route can be taken by each packet that's sent.
Here's a brief summary of the four versions in active rotation at this writing. They are listed in order
from most recent to oldest.
WinSock 2 Version
Description
Windows 2000 beta 3
Supports IP Multicast, and QoS with RSVP and ATM. Supports hundreds of
simultaneous connections. This is the latest and best that we've seen.
Windows 98
Supports IP Multicast, and QoS with RSVP (but no ATM signaling). Supports about 50
simultaneous connections. Much improved over Windows 95, but not as good as
Windows 2000 beta 3. Watch for upcoming fixes and improvements.
standalone Windows 95
WinSock 2 package
Windows NT 4.0
service pack 3
Datagrams work like two people exchanging letters via the postal service: there's no guarantee letters
arrive in order, or at all. Without any additional work, this situation is unacceptable for many
applications: those that require reliable delivery of data. If they use datagrams, those types of
applications must follow an approach that ensures the data is properly exchanged. Such an approach
typically requires the use of:
Acknowledgments, to let the sender know the partner has received data.
Timers, so the sender can retransmit its data if it doesn't receive an acknowledgment from the partner
soon enough.
A flow control mechanism, to prevent the sender from flooding its partner with too much data.
Other applications, such as multimedia applications, do not require acknowledgments or timers. They
typically send data in a steady stream at a rate that does not flood the partner. They can usually
accommodate some data that is out of order or lost due to network congestion. Applications such as video
applications, audio applications, and stock ticker applications do not need confirmation that the data has
been received. Chariot's multimedia support lets you emulate these types of applications. See
Understanding Multimedia Support on page 37 for more information.
1.
A windowing scheme is used as the flow control mechanism: a sender sends at most a certain amount
of data before waiting for an acknowledgment from the receiver.
2.
The sender waits for a period of time (the retransmission timeout period) to receive an
acknowledgment from its partner. If the acknowledgment does not arrive in time, the sender
retransmits the window of unacknowledged data.
3.
The receiver sends an acknowledgment when either of two events occurs:
A window is filled.
All of the data is received for an application script RECEIVE command.
An acknowledgment is not sent immediately after all the data of a RECEIVE is received. Instead, the
acknowledgment is sent when the next API call is issued (unless the next command is RECEIVE, in
which case the endpoint keeps receiving until the window is full).
This is a common way for peer reliable datagram applications to keep traffic to a minimum. It also
means that the endpoint sends one less datagram when a RECEIVE is followed by a SEND.
4.
If the receiver detects lost datagrams, it sends an acknowledgment indicating how much of the
window was received in sequence, thereby letting the sender retransmit what wasn't received.
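As a rough sketch of the windowing scheme described above (in Python, with hypothetical names; Chariot's actual endpoint code is far more involved), the number of datagrams and acknowledgment waits for a loss-free SEND can be modeled like this:

```python
import math

def window_transfers(data_len: int, window_size: int, buffer_size: int):
    """Toy model of the windowed scheme: how many datagrams and how many
    acknowledgment waits a SEND of data_len bytes needs when every window
    arrives intact (no retransmissions)."""
    datagrams = math.ceil(data_len / buffer_size)      # one datagram per buffer
    per_window = max(1, window_size // buffer_size)    # datagrams per window
    ack_waits = math.ceil(datagrams / per_window)      # one ack per window
    return datagrams, ack_waits

# 100,000 bytes with a 4,096-byte window and 1,024-byte datagrams
print(window_transfers(100_000, 4096, 1024))  # (98, 25)
```

Note how a larger window reduces the number of acknowledgment waits, at the cost of more data to resend if a window fails.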
When an endpoint sends and receives data as fast as possible, the endpoint's timeout mechanism is
sufficient to detect when datagrams are lost or late. However, if a non-zero SLEEP command is specified
in the application script, the endpoint must send a datagram to the partner endpoint indicating that it will
not be sending or receiving data for the length of time indicated by the SLEEP. The partner endpoint
must acknowledge the condition by sending back its own datagram. These datagrams are counted in the
datagram statistics for the connection.
You should use the application script which most closely matches the application you want to emulate.
For example, a typical datagram application is a file server program (such as NFS, which uses UDP, or
NetWare, which uses IPX). These can be emulated with the File Send Long Connection or File Receive
Long Connection script.
The Packet Blaster scripts are not intended to emulate a frame thrower: application scripts require
reliable delivery of data (which implies the use of acknowledgments), whereas frame throwers do not. See
the Messages and Application Scripts manual for information on setting the delivery rate for data.
There are some application scripts that do not run over an IPX network with their default values. These
scripts are intended to emulate applications that send large amounts of data using the TCP protocol.
They are not intended to be used between endpoints in an IPX network, because IPX does not support
fragmentation and reassembly of large amounts of data. Note, however, that the scripts will work if you
change the send_buffer_size values in the scripts to prevent fragmentation, or if the protocol is SPX.
Window Size
The Window Size is the number of bytes that can be sent to the partner without an acknowledgment.
Window Size imposes flow control. Set this to avoid flooding the network or an endpoint with too many
datagrams. Sending more datagrams than the network can handle results in lost datagrams, and lost
datagrams can result in timeouts and retransmissions, which in turn mean poor performance. If the
Window Size is too small, datagrams are much less likely to be lost, but performance won't be as good as
it would be otherwise.
The number of datagrams sent in a window can be calculated by taking the Window Size, dividing by the
script's send_buffer_size, and rounding up (endpoints won't send partially full datagrams until the
end of a SEND command). You can enter values in the range from 1 to 9,999,999 bytes for the Window
Size.
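The calculation above can be written out directly (an illustrative Python sketch; the function name is our own, not a Chariot API):

```python
import math

def datagrams_per_window(window_size: int, send_buffer_size: int) -> int:
    """Number of datagrams sent in one window: the Window Size divided by
    the script's send_buffer_size, rounded up (endpoints don't send
    partially full datagrams until the end of a SEND command)."""
    return math.ceil(window_size / send_buffer_size)

# e.g. a 4,096-byte window with a 1,460-byte send_buffer_size
print(datagrams_per_window(4096, 1460))  # 3
```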
Setting this field to a small number causes more frequent checking to occur during SEND commands.
This decreases performance by forcing the sender to wait for acknowledgments and by increasing the
number of data packets flowing through the network. For example, if you set the window size to 4,096,
the sender seeks an acknowledgment from the receiver after each block of 4,096 bytes it has sent. You
can set the value larger, resulting in fewer pauses for acknowledgments, but if the acknowledgment
indicates a failure, a larger number of bytes will have to be re-sent.
If the Window Size is large, more datagrams may need to be retransmitted. In addition, memory usage is
a concern when making the Window Size larger. When a test begins, both endpoints in each connection
using a datagram protocol allocate enough memory to hold a complete window.
If the application being emulated supports datagram parameters, use the same values it is using.
Otherwise, choose a Window Size large enough for several datagrams (or more/less, depending on the
expected load on the network and the individual hosts) to be sent before an acknowledgment is required.
One approach is to begin low and gradually increase until either performance no longer improves or
retransmissions increase.
This parameter should be at least the average round-trip time between the two endpoints, not counting
overhead for processing the datagrams at the endpoints. For example, if the two endpoints are connected
via satellite, use 2 * 270ms plus some overhead depending on the speed of the computers. A value of 10
or 20 milliseconds may be appropriate for two endpoints directly connected on a LAN, such as Ethernet.
Our suggestion is to set the Retransmission Timeout to twice the propagation delay of your network. This
is equivalent to the response time measured by the Inquiry, Long Connection (INQUIRYL) script.
The timeout parameter should be increased to account for the number of pairs expected to use the network
concurrently. A low timeout parameter can lead to many endpoints each retransmitting at the same time,
resulting in yet more congestion of the network.
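The rule of thumb above (twice the propagation delay, plus processing overhead) amounts to a one-line calculation. This is an illustrative sketch, not Chariot code:

```python
def suggested_retransmission_timeout(propagation_delay_ms: int,
                                     processing_overhead_ms: int = 0) -> int:
    """Suggested Retransmission Timeout: twice the one-way propagation
    delay of the network, plus endpoint processing overhead."""
    return 2 * propagation_delay_ms + processing_overhead_ms

# satellite link (~270 ms one way) with some endpoint overhead
print(suggested_retransmission_timeout(270, 60))  # 600
# directly connected LAN such as Ethernet
print(suggested_retransmission_timeout(5, 10))    # 20
```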
Your network administrator can tell you the packet size. The packet size is limited by the type of physical
network: Ethernet's packet size is around 1,500 bytes, and Token Ring's is approximately 4,096 bytes.
The packet size also depends on the network software. For example, some versions of IPX don't support
more than 512 bytes, even on an Ethernet.
unicast
broadcast
multicast
In unicast delivery, an application sends data from a single source to a single destination on the network.
An example of unicast delivery is a telephone call (before the invention of three-way calling and
conference calling). This type of communication is between two points and the data (which in this case is
conversation) flows between only these two points.
Broadcast delivery lets you send data to all destinations, regardless of whether or not the receivers want to
receive the data. Radio is an example of broadcast delivery. Radio programs are sent through the air
waves, regardless of whether anyone is listening to the broadcast. The broadcast is accessible by everyone
with a radio set. You do not need to request to receive radio shows.
In multicast delivery, an application sends data to a single address, called a multicast address. The routers
in the network decide where to deliver the data based on whether other applications are listening. A
benefit of multicast is that multiple unicast connections do not have to be set up. Another benefit is that,
unlike broadcasting, the data is only sent to destinations where applications are listening and want to
receive the data. An example of multicast is video and audio conferencing. In this use, members of the
group are subscribed before the conference. During the conference, the video or audio is only received by
the members of the group.
To emulate a multimedia application, select a streaming script. When you run a streaming script in a test,
data is sent in only one direction. Throughput and lost data are calculated by Endpoint 2. If RTP is used,
Endpoint 2 also calculates jitter. Because of the nature of unreliable protocols, lost and out-of-order
data can occur. This does not cause multimedia pairs to fail.
Streaming scripts have a fixed format:
However, you can modify variable values, such as the send_data_rate. See the Messages and Application
Scripts manual.
For pairs running streaming scripts, test results contain information about the data quality. The Lost
Data tab in the Test window lets you see how much data was lost during the run. If you are using RTP,
the Jitter tab in the Test window shows jitter data. Graphs of lost data are associated with these tabs,
so you can graphically view how much data was lost, and when. You can use the Datagram tab to view
the number of lost or out-of-order datagrams.
Set these parameters to be as similar as possible to the multicast application you are simulating. If you
want to determine what those values should be, here are some general guidelines:
1.
Receive Timeout is the number of milliseconds the endpoint issuing a RECEIVE command waits
before determining that a script has ended. If the data has not been received in this amount of time,
endpoints send a notification to the sender that the data was not received.
Receive Timeout is used for both multimedia pairs and multicast groups. This value is configured on
the Datagram tab of the Run Options notebook.
If you set this number too low and the transmission encounters normal network delays, the receiver
may time out while the data is still transmitting. If you set this number too high, the receiver may
spend unnecessary time waiting for a transmission that has failed.
If Endpoint 2 is using Windows 95, 98, or NT, the minimum receive timeout value is 500
milliseconds.
2.
IP Multicast Time To Live (TTL) controls the forwarding of IP Multicast packets. (See IP
Multicast on page 42 for more information.) Set this TTL value based on how far you want the data
forwarded. Expect to do some experimentation to find the best combination of variable settings.
This field defaults to 1. A value of 1 means that the packet does not leave the senders local subnet.
If you want to route the packet across a router, you must set the value of this field to at least 2. A
general rule is to set the TTL to one more than the number of router hops to the farthest endpoint in
the multicast group.
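The general rule stated above can be expressed as a one-liner (an illustrative sketch; the function name is our own):

```python
def multicast_ttl(router_hops_to_farthest_endpoint: int) -> int:
    """General rule: set the TTL to one more than the number of router
    hops to the farthest endpoint in the multicast group."""
    return router_hops_to_farthest_endpoint + 1

print(multicast_ttl(0))  # 1 -> packets stay on the sender's local subnet
print(multicast_ttl(3))  # 4 -> packets can cross three routers
```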
We found that with Microsoft's TCP/IP stacks on Windows 95, 98, or NT, a TTL value of 0 still lets
multicast packets leave the local host. Thus, if you run a loopback test with one computer, you may
impact the performance of your network, as the packets are broadcast on the local subnet.
See Changing the Run Options on page 78 in the Operating the Console chapter for details on these two
fields.
RTP Configuration
You can support the Real-time Transport Protocol (RTP) for pairs or multicast groups using streaming
scripts. Many of the leading voice and video applications are using RTP as their framework for
communications. RTP is an Internet standard, documented in RFC 1889. You can use RTP in both
unicast and multicast pairs.
An RTP packet is a UDP datagram with an additional 12-byte header. The 12-byte RTP header contains
fields such as payload type, sequence number, and timestamp.
RTP does not provide any mechanism to ensure timely delivery or provide QoS guarantees; it relies on
lower-layer services to provide these. RTP does not guarantee delivery or prevent out-of-order delivery,
nor does it assume that the underlying network is reliable and delivers packets in sequence. The
sequence numbers included in RTP allow the receiver to reconstruct the sender's packet sequence.
Sequence numbers might also be used to determine the proper location of a packet, for example in video
decoding, without necessarily decoding packets in sequence.
In addition, RTP is a key component of the H.323 specification. H.323 is a standard for audio, video, and
data across IP networks, such as the Internet. See the Primer on H.323 Series Standard on
http://www.databeam.com/h323/h323primer.htm for more information on H.323.
A benefit of using RTP is that you can perform tests to determine the impact of the RTP header
compression performed by many routers. With an RTP header of 12 bytes, a UDP header of 8 bytes, and
an IP header of 20 bytes, the combined header size is usually 40 bytes. This is equal to the size of the
average audio payload. When you compress the header information, the total header size is reduced to 3
or 4 bytes. To enable the RTP header compression option on your routers, refer to their documentation.
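The header arithmetic above works out as follows (a simple worked example in Python, not Chariot code):

```python
RTP_HEADER = 12   # bytes
UDP_HEADER = 8    # bytes
IP_HEADER = 20    # bytes

# uncompressed, every packet carries 40 bytes of headers,
# roughly the size of the average audio payload
uncompressed = RTP_HEADER + UDP_HEADER + IP_HEADER
print(uncompressed)  # 40

# with RTP header compression enabled on a router, the combined
# header typically shrinks to 3 or 4 bytes
compressed = 4
print(uncompressed - compressed)  # 36 bytes saved per packet
```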
RTP flows use even port numbers. The recommended value for the port number is between 16384 and
65535. If you select the AUTO value for the port_number variable, Chariot uses an even port number in
this range.
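A sketch of picking an even port in the recommended range (this mimics the AUTO behavior described above only loosely; the function is our own, not Chariot's actual selection logic):

```python
import random

def pick_rtp_port(low: int = 16384, high: int = 65535) -> int:
    """Pick an even port number in the recommended RTP range
    (RTP flows use even port numbers)."""
    port = random.randrange(low, high + 1)
    return port if port % 2 == 0 else port - 1  # round odd picks down

p = pick_rtp_port()
print(p % 2 == 0 and 16384 <= p <= 65535)  # True
```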
The interarrival jitter is the jitter calculated continuously as each data packet (I) is received from the
source. The jitter is calculated according to the formula defined in RFC 1889:
J = J + (|D(I-1,I)| - J)/16
Jitter is measured in timestamp units and is expressed as an unsigned integer. Whenever the endpoint
creates a timing record, the current value of J is sampled. This algorithm is the optimal first-order
estimator and the gain parameter 1/16 gives a good noise reduction ratio while maintaining a reasonable
rate of convergence.
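One step of this estimator can be sketched in Python (an illustration of the RFC 1889 formula, not Chariot's implementation; D is taken here as the difference in relative transit time between consecutive packets):

```python
def update_jitter(j: float, transit_prev: float, transit_cur: float) -> float:
    """One step of the RFC 1889 interarrival jitter estimator:
    J = J + (|D(I-1,I)| - J)/16, where D is the change in relative
    transit time (in timestamp units) between consecutive packets."""
    d = abs(transit_cur - transit_prev)
    return j + (d - j) / 16.0

# a packet arriving 16 timestamp units later than the previous one
print(update_jitter(0.0, 100.0, 116.0))  # 1.0
```

The gain of 1/16 means each new sample nudges J only slightly, which is why the estimate is noise-tolerant but converges gradually.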
Almost all data transfers have jitter. However, the amount of jitter and the relationship of the jitter to the
throughput indicate whether the jitter is causing a problem on the network. Jitter can follow two
patterns. In one pattern, the delay of each packet steadily increases; in this case, the jitter values
increase and the throughput decreases. In the other pattern, the jitter increases but the throughput
remains constant; in this case, the delay varies widely. This could cause problems for some
delay-sensitive applications, but there is not a noticeable decrease in throughput.
Various elements in the network can cause jitter. When troubleshooting jitter, first run a test to
benchmark the amount of jitter received when just using the TCP/IP stack. Then run a test and add
network elements such as a router to determine which element is causing the jitter.
Another cause of jitter is router queuing algorithms. The combination of the queuing algorithm in the
router with the network configuration could cause jitter. To troubleshoot, try running tests using
different queuing algorithms on the router.
IP Multicast
Chariot supports testing of IP Multicast. In IP Multicast delivery, an application sends data to a single
address, called a multicast group address. The routers in the network decide where to deliver the data,
based on whether other downstream applications are listening. The benefit of IP Multicast is that it avoids
multiple unicast connections to deliver data to multiple receivers. Unlike broadcasting, the data is only
sent to destinations where applications are listening and want to receive the data.
IP Multicast uses UDP to deliver data from one sender to multiple receivers. IP Multicast testing requires
Network Performance Endpoints at version 3.1 (or later). Any computer designated as Endpoint 1 can
send data to a group of multiple Endpoint 2 computers with a single UDP or RTP data stream. The
sender does not guarantee delivery of the data to the receivers.
Most multimedia applications stream data to their receivers, without expecting acknowledgment of
delivery. Chariot multimedia support is based on sending a stream of data between two or more
endpoints, without acknowledgments or retransmissions. Data may be sent as fast as possible or at a
controlled data rate. This data rate is controlled using the send_data_rate parameter of the SEND
command in the application script. You can also vary the file size (that is, the interval between timing
records) and the buffer size of the data to send.
In an IP Multicast group, receivers must subscribe to the multicast group prior to receiving data. The
multicast group is identified with the IP Multicast address and port. The IP Multicast address specifies
the multicast group to which data should be delivered. This class D IP address falls in a reserved range
(224.0.0.0 through 239.255.255.255).
The IP Multicast port identifies one of the possible destinations within a given host computer. Chariot
uses the combination of the multicast address and multicast port to uniquely identify a multicast group.
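As an aside (not part of Chariot), the Python standard library can check whether a candidate group address falls in the class D multicast range:

```python
import ipaddress

# class D (multicast) addresses fall in 224.0.0.0 through 239.255.255.255
addr = ipaddress.ip_address("224.1.2.3")
print(addr.is_multicast)                                # True
print(ipaddress.ip_address("44.44.44.3").is_multicast)  # False
```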
In IP Multicast delivery, you can constrain how far an IP Multicast packet is forwarded. Routers use the
Time To Live (TTL) value to determine when to stop forwarding packets. TTL is a router hop count, not
a time duration. Set the TTL so the sending endpoint can reach all the receiving endpoints in the group.
This should be the value used by the sender in the multicast application you are emulating.
To emulate a multimedia application, select one of the multimedia scripts in Chariot. When you select a
multimedia script, data is sent in only one direction. Throughput and lost data are calculated by Endpoint
2. Because of the nature of an unreliable protocol, lost and out-of-order data can occur. This does not
cause multimedia pairs to fail.
The Chariot test results contain information about the data quality, for pairs running multimedia scripts.
The Lost Data tab in the Test window lets you see how much data was lost during the run. Graphs of lost
data are associated with this tab, so you can graphically view how much or when the data was lost. You
can use the Datagram tab to view the number of lost or out of order datagrams.
The key flows in the above picture are numbered and described below.
1. A test is created at the Chariot console, and the user presses the Run button. The console sends the
setup information to Endpoint 1, using a TCP connection. The setup includes the following:
The sections on the Main window, the Test windows, and the Comparison window are broken into subsections
corresponding to their menu items. These, in turn, are divided into discussions of tabs and dialogs. This
hierarchical breakdown makes this chapter a little hard to read from front to back; use the Table of Contents,
the Index, and online searching to find specific topics.
Start the Chariot console program either by double-clicking on the Chariot Console icon in the Chariot folder,
or by entering the following at a command prompt:
d:\path\CHARIOT [test_filespec]
where d: and path are the drive and path where you installed the Chariot console. You can optionally enter the
filespec for an existing Chariot test file; Chariot loads that file as it starts itself.
Let's look at each of the windows, and how they help you with a test.
Welcome
Introducing Chariot
Operating the Console
Tips for Testing
Messages manual
Application Scripts manual
Performance Endpoints manual
In the Main window you can choose the Options Menu to change the defaults used by all the tests. It has two
menu items:
Change user settings shows a notebook where you can change the defaults used in the dialogs for creating
and running a test.
Change display fonts lets you change the font used in the Test window.
Choose the Tools menu to customize Chariot. It has six menu items:
Compare Tests lets you compare the results of multiple Chariot tests
Edit Scripts lets you modify existing scripts or create a new script
Edit Output Templates lets you save print options in a template
Edit IPX/SPX Entries lets you save IPX/SPX Entries in a template
Edit QoS Templates lets you work with Quality of Service Templates
View Error Log shows you the error log for the endpoint or the Chariot console
The Window Size is the number of bytes that can be sent to the partner without an acknowledgment. The number of datagrams sent in a window can be calculated by taking the Window Size, dividing by the script's send_buffer_size, and rounding this up (Chariot won't send partially full datagrams until the end of a SEND command).
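That calculation can be written out directly (a sketch; the parameter names mirror the fields described above):

```python
import math

def datagrams_per_window(window_size, send_buffer_size):
    """Datagrams per window: the Window Size divided by the script's
    send_buffer_size, rounded up (no partially full datagrams are sent
    until the end of a SEND command)."""
    return math.ceil(window_size / send_buffer_size)

# For example, a 16,384-byte window with 1,460-byte send buffers:
print(datagrams_per_window(16384, 1460))  # 12
```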
The Retransmission Timeout Period field is the number of milliseconds the sender will wait, after
sending for the first time or retransmitting a block to the receiver, to receive an acknowledgment that the
block was received.
The Number of Retransmits before Aborting field is the number of times the sender will resend a block
of data for which an acknowledgment is not received.
The following two options on this notebook page only apply to streaming scripts. See Modifying the
Multimedia Run Options for the Test on page 39 in the Working with Datagrams and Multicast chapter for
more information on these options.
The Receive Timeout field is the number of milliseconds the receiver waits before determining that the
streaming script has ended.
If Endpoint 2 is using Windows 95, 98, or NT, the minimum receive timeout value is 500 milliseconds.
The Multimedia Time To Live (TTL) field controls the forwarding of IP Multicast packets. Set the Time
To Live value based on how far you want the data forwarded. Expect to do some experimentation to find
the best combination of variable settings. If you change your mind, press Undo to reset all the fields to the
values you had before you made any changes.
This field defaults to a value of 1. However, a TTL value of 1 does not allow the packet to cross a router. If you run a test with a TTL less than the number of router hops in the path, the test fails and you receive CHR0216. You will need to increase the TTL to run a test with a multicast group that crosses a router.
kBps: 1,024 Bytes per second
Kbps: 1,024 bits per second (that is, 128 Bytes per second)
kbps: 1,000 bits per second (that is, 125 Bytes per second)
Mbps: 1,000,000 bits per second (that is, 125,000 Bytes per second)
Gbps: 1,000,000,000 bits per second (that is, 125,000,000 Bytes per second)
We don't advocate changing your throughput units from KBps unless it is really necessary. Differing throughput units can cause confusion when comparing results. Be especially careful that you're using the same units when cutting and pasting exported values from different files.
These units do not affect other numbers, like transaction rate, response time, or relative precision.
On the Throughput notebook page of the Change User Settings notebook, you can press Undo to reset to the
units you had before you made changes.
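To see why mixed units cause confusion, here is a sketch of converting between them (the bytes-per-second value for kBps assumes the 1,024-byte convention; this helper is illustrative, not part of Chariot):

```python
BITS_PER_SECOND = {
    "kBps": 1024 * 8,        # 1,024 Bytes per second, expressed in bits
    "Kbps": 1024,            # 1,024 bits per second
    "kbps": 1000,            # 1,000 bits per second
    "Mbps": 1_000_000,       # 1,000,000 bits per second
    "Gbps": 1_000_000_000,   # 1,000,000,000 bits per second
}

def convert(value, from_unit, to_unit):
    """Convert a throughput value between the units above."""
    return value * BITS_PER_SECOND[from_unit] / BITS_PER_SECOND[to_unit]

# The same measurement reads differently in each unit:
print(convert(1000, "Kbps", "kbps"))  # 1024.0
```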
Test summary and run options: provides a summary of any results and your run options
Pair summary: provides pair information contained in the Test Setup and Results tabs of the Test Window
Pair details: provides the timing records for the pairs in your test
You can choose to report on all the pairs (Export all) or you can choose to report on only specific pairs or
groups of pairs (Export marked groups and pairs). This second radio button lets you choose the pairs that
have the mark symbol next to them, in the first column on the left-hand side of the Test Window.
Running a test (for more information, see Running a Test on page 81 in The Test Window section)
We recommend using Auto (letting Chariot dynamically select the port number), unless your testing requires
that timing records use a specific port.
The Endpoint 1 through firewall to Endpoint 2 section of the dialog lets you select firewall options for
testing through firewalls from Endpoint 2. If you are testing through Network Address Translation (NAT)
firewalls, select the Use Endpoint 1 identifier in data option. See Testing Through Firewalls on page 138.
The endpoints will add a 4-byte correlator to the data sent. If you are testing through a firewall that does data
inspection, select the Use Endpoint 1 fixed port option. The endpoints do not send a correlator field in the
data.
Chariot does not currently support firewalls that do both NAT and data inspection.
If you change your mind, press Undo to reset all the fields to the values you had before you made any changes.
Modify the options you want to change in the output template. See Print and Export Options on page 62 in
The Test Window section for more information on these options.
To save your changes to the output template, press the OK button. The Output Template List dialog is shown.
No Traffic
In either the sending or receiving QoS specification, this value indicates that there will be
no traffic in this direction. On duplex-capable media, this signals underlying software to
set up unidirectional connections only.
Best Effort
The service provider takes the QoS specification as a guideline and makes reasonable
efforts to maintain the level of service requested, without making any guarantees on packet
delivery. The network routers do not guarantee prioritization of the data.
Controlled Load
The network router gives priority to the data and operates as if the data were the only data on the network at the time. Thus, applications using this service may assume:
A high percentage of transmitted packets will be successfully delivered by the network to
the receiving end nodes. (Packet loss rate should closely approximate the basic packet
error rate of the transmission medium.)
Transit delay experienced by a high percentage of the delivered packets will not greatly
exceed the minimum transit delay experienced by any successfully delivered packet.
Guaranteed
This service type value is designed for applications that require a precisely known quality
of service but would not benefit from better service, such as real-time control systems. The
service provider implements a queuing algorithm which isolates the flow from the effects
of other flows as much as possible and guarantees the application the ability to propagate
data at the Token Rate for the duration of the connection. If the sender sends faster than the
Token Rate, the network may delay or discard the excess traffic. If the sender does not
exceed the Token Rate over time, then Latency is also guaranteed.
General Information
No Change
In either sending or receiving QoS specifications, this level of service requests that the
quality of service in the corresponding direction is not changed. Select No Change when
requesting a QoS change in one direction only, or when requesting a change only in the
Provider Specific part of a QoS specification and not in the Sending Flowspec or the
Receiving Flowspec.
The Token Rate field and the Token Bucket Size field describe the bandwidth, which is the rate at which a
stream of data can be sent. These fields are designed to efficiently accommodate transmissions that vary in
rate. The basic concept portrays a bucket that is filled with tokens at the specified token rate. Each token lets
an application send a certain amount of data.
Token Rate (bytes/sec)
This field defines the sustained rate at which tokens are added to the bucket. If packets are sent out uniformly at the token rate, the bucket remains empty. Each outgoing packet is matched by one token. If a packet is sent without a matching token, the packet may be dropped. If the transmission rate is less than the token rate, the unused tokens accumulate up to the token bucket size. The Token Rate field is expressed in bytes/second. A value of No Rate indicates that no rate limiting is enforced. If this is the case, the Token Bucket Size field is not applicable.
To transmit without losing packets, the following must be configured:
Set the Token Rate field at or above the average transmission rate.
Set the Token Bucket Size field large enough to accommodate the largest expected burst of data.
If an application sends data at a low rate for a period of time, tokens accumulate, and it can then send a large burst of data all at once until it runs out of tokens. Afterwards, data may only be sent at the token rate until unused tokens accumulate again.
Token Bucket Size (bytes)
This field controls the size of data bursts in bytes, but not the transmission burst rate. If packets are sent too rapidly, they may block other applications' access to the network for the duration of the burst. This field is the largest typical frame size in video applications, expressed in bytes. In constant rate applications, the Token Bucket Size is chosen to accommodate small variations. In video applications, the token rate is typically the average bit rate, peak to peak. In constant rate applications, the Token Rate field should be equal to the Maximum Transmission Rate field.
Latency (microseconds)
Enter the maximum acceptable delay in microseconds between transmission of a packet by the sender and
its receipt by the intended receiver or receivers. The precise interpretation of this number depends on the
service type specified in the QoS request.
Delay Variation (microseconds)
This field contains the difference, in microseconds, between the maximum and minimum possible delay
that a packet experiences. This value is used to determine the amount of buffer space needed at the
receiving side in order to restore the original data transmission pattern.
Maximum Transmission Rate (bytes/sec)
This field is where you specify how fast packets may be sent back to back, expressed in bytes/second. This information lets intermediate routers allocate their resources more efficiently. In this field, enter the maximum rate, in bytes per second, used in the traffic flow.
Maximum Transmission Size (bytes)
This field is where you specify the largest packet size, in bytes, that is permitted in your network. QoS-enabled routers use this in their policing of multicast network traffic.
Minimum Policed Size (bytes)
Enter the minimum packet size in bytes that will be given the level of service requested.
Select the General Help menu item to get descriptive information about the window you are currently viewing.
Select the Using help menu item for guidance on using the online help in Chariot.
Select the Keys help menu item for a list of all keys and key combinations available in each window.
Select the About Chariot menu item for details on the Chariot version and build level, and for information
about service and support.
F2
F3: exit Chariot.
F9
F10
F11: get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+N: set up a new Chariot test. This brings up a new Test window, and lets you immediately begin defining the first endpoint pair in the test.
Ctrl+O
Alt+F4: this key combination can be used to close any window or dialog. When used to close a dialog, it has the same effect as pressing the Esc key or pressing Cancel with the mouse.
In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.
The File Menu (Test Window): to save, export, print, or clear results
The Edit Menu (Test Window): to add, change, copy, or delete pairs
The View Menu: to change how pairs are grouped and shown
Running a Test: to control the running of the test
Help: to get more information
Tabs:
The Test Setup tab is always available; the last seven tabs are only available when a test has results. Pressing
the Throughput, Transaction Rate, Response Time, Lost Data, or Jitter tab causes the appropriate graph to be
displayed. Choosing one of the other three tabs causes the pair information in the top portion of the Test
window to change; the graph display at the bottom remains the same.
Start status (shown when the test is initializing and running) gives the overall progress of the test. When
the test ends, this field displays the start date and time of the current test.
End status (only displayed while the test is running) displays the elapsed time (hr:min:sec). When the test
ends, this field shows the ending date and time.
Duration (displayed while the test is running) estimates the time remaining until the test will complete
(hr:min:sec). This shows no value until enough timing records are received to calculate an estimate.
When the test ends, this field shows the actual total run time.
You might see the estimated remaining time increase in large increments, if one or more pairs are using
random sleep durations.
Completion status (only displayed when the test ends) displays whether the test: ran to completion; was
stopped by the user; or stopped because an error was detected.
Select the Export menu item to export any aspect of the test. You select the format in which you want to export the test and which aspects of the test you want to export.
You can export formatted results to
We have added support for the CSV file format in this release, to allow you to export data to a comma-separated value format that can be read into most spreadsheet applications. See Export Options for CSV file on page 65 for more information on the CSV file format.
The WK3 export format will not be supported in the next release of Chariot. In this release, the WK3 file
format does not support the new features added for Chariot 3.1, such as jitter data and CPU Utilization data.
This information is not included in tests exported to this format.
See the File Types and How They are Handled section on page 108 for information on how these files are
handled.
The Print Options (or Export Options) dialog lets you decide how much of the current test you want to print or
export. Remember that tests with lots of pairs and lots of timing records can take lots of paper!
Before choosing print or export options, make sure that the pairs of interest are expanded in the Test window.
If the group you want to print or export from is collapsed, double click on the group to expand it, so that the
pairs are shown. If a group in the Test Window is collapsed, its pairs are not detailed when you export or
print.
Having chosen the pairs you want to include in your report, you can next choose how much detail you want to see. Because CSV files are read by other software programs, the content of all CSV files must have the same layout. Therefore, you are limited in the export options regarding the content exported. See Export Options for CSV file on page 65 for information on the CSV file format and the Export to CSV File dialog.
Chariot lets you create output templates for your print/export options. You can then select a template when you
are printing or exporting without having to reselect the options each time. The first time you access the
Print/Export dialog for each test during a Chariot session, the Output Template field defaults to the output
template selected on the Output tab of the User Settings notebook. See Changing Your Output Defaults in
The Main Window section for more information on selecting default output templates.
To use an output template you have previously created or to modify an output template, select the output
template from the Output Template field. To create a new output template from this dialog, enter the name of
the new output template in this field. The special characters *, \, and ? are not allowed in an output template
name.
You can also create new templates, modify existing templates, delete templates, and create a copy of a template
by selecting the Output Templates menu item from the Tools menu on the Main window. See Working with
Output Templates in The Main Window section for more information.
Summary report
Show just the test setup and a summary of any results. You can also choose to see the information shown
in each of the tabs and graphs.
Check Result Tables to see the information summarized in the Throughput, Transaction Rate,
Response Time, Lost Data, and Datagram tabs.
Check Result Graphs to output the corresponding graphs. When exporting, GIF files are created.
This option is only available for HTML Export.
Complete report
Show everything: all the setup information, all the scripts, all the results analysis, and all the individual
timing records. Not recommended unless you have small tests or lots of paper!
Custom
This button offers you a dialog box that lets you choose precisely what to show in your report or display the
selections from an output template you have created. See Custom Print and Export Options for more
information on this dialog.
As another simplification, you may want to print information about a limited number of pairs. To do this, use the 'mark' column in the Test window to choose only the relevant pair or pairs. Click on the Print icon and go to the scope group in the middle of the dialog. Select the Print marked groups and pairs radio button. This limits the output to only marked groups or pairs. Next, press the Custom radio button, then deselect everything except the parameters that are of interest to you.
If you want to save the options you have selected on this dialog with the test, select the Save options with test
checkbox. The next time that you access the Print dialog for this test, the options you have selected will be
filled in. If you created a new output template or modified an existing output template, press the Save
Template button. This saves your print options to the selected output template.
At the bottom of the dialog, Chariot shows the approximate number of pages to be printed, based on the options you have selected.
When setting options for printing, you can press the Select printer button to choose among the printers defined at your computer. From the Properties button, you might also choose landscape or portrait printing, which may offer you a more readable view of your results. If you have long addresses or comments, we recommend landscape printing to get the extra width.
Test setup: provides information contained in the Test Setup tab of the Test window
Throughput results: produces results displayed in the Throughput tab of the Test window
Transaction rate results: produces the results displayed in the Transaction Rate tab of the Test window
Transaction rate graph: produces the graph which depicts the transaction rate results
Response time results: produces results displayed in the Response Time tab of the Test window
Response time graph: produces the graph which depicts the response time results
Lost data results: produces results displayed in the Lost Data tab of the Test window
Lost data graph: produces the graph which depicts lost data results
Jitter results: produces results displayed in the Jitter tab of the Test Window
Datagram results: reports results displayed in the Datagram tab of the Test window
Raw data totals: information displayed in the Raw Data Totals tab of the Test window
When you choose to export to HTML, the graphs are output as separate files in GIF format. A GIF file can be
imported into your favorite word processor or graphics program, as well as being linked to the Web page
created by the export. See the File Types and How They Are Handled section on page 108 to understand the
filenames used for the GIF files.
None of the Detailed Information options is selected by default. Click on the Detailed information options that you wish to include in your report.
Scripts: the script commands and variables used for your test are reported
Endpoint configuration details: detailed information about the endpoints in your test is reported. You can see this information by choosing the Endpoint configuration menu item from the View menu.
Timing records: the individual timing records for the pairs in your test are reported. There can be a lot of these (tens of thousands), so we recommend getting an idea of the number by selecting the Raw Data Totals tab.
All of these options can be chosen by pressing the Select all button. A checkmark appears beside each of the
options, indicating that it is selected. You can uncheck all the options by pressing the Deselect all button.
One or more of these Print or Export options may appear grayed, indicating that the option(s) is not available
to be reported on for this particular test. An option is enabled only when information of that type exists for the
test.
At the bottom of the dialog, Chariot shows the approximate number of pages to be printed (when setting
options for printing).
There are two special characters in CSV format which need special treatment: commas and double-quotes.
If a comma is in a string, surround the whole string with double quotes. For example, the comment field
below contains two commas:
"These commas, here, are part of the comment."
Double each double quote in a string, and surround the whole string with double quotes. For example, the comment field below contains two embedded double quotes:
"This endpoint is used in the ""R&D1"" department."
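Both rules can be checked with Python's csv module, which applies the standard quoting conventions automatically (a sketch for verifying the format, not Chariot's exporter; the row contents are the examples above):

```python
import csv
import io

# Fields containing commas or double quotes are wrapped in quotes,
# and embedded double quotes are doubled.
rows = [
    ["pair 1", "These commas, here, are part of the comment."],
    ["pair 2", 'This endpoint is used in the "R&D1" department.'],
]
buf = io.StringIO()
csv.writer(buf).writerows(rows)
print(buf.getvalue())
```

The output shows the first comment wrapped in quotes because of its commas, and the R&D1 quotes doubled inside a quoted field.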
The Export to CSV dialog is shown when you select the Export to CSV menu item from the File menu. You
can select the type of content you want exported from the test. Remove the checkmark from those options that
you do not want included in your report.
Test summary and run options: provides a summary of any results and your run options
Pair summary: provides information contained in the Test Setup tab of the Test Window
Pair details: provides the timing records for the pairs in your test
You can choose to report on all the pairs (Export all) or you can choose to report on only specific pairs or
groups of pairs (Export marked pairs). This second radio button lets you choose the pairs that have the
mark symbol next to them, in the first column on the left-hand side of the Test Window.
Printer Options
Press the Select printer button to select the printer for your output from among the printers currently defined on your computer. To change the characteristics of that printer, choose the Properties or Job properties button.
2. Press Ctrl+X
3.
If the Test window contains test results, you are prompted for confirmation of the cut, since a cut operation
clears all test results. At this point, you can cancel the cut operation or proceed with the cut. After the cut
operation is successfully executed, the Paste menu item under the Edit menu and the Paste icon on the toolbar
are enabled.
Copy
You can copy an existing pair or group of pairs from the Test window as follows. First, select the pair(s) to be copied by clicking on the individual pair. Once selected, the pair is highlighted. When copying multiple pairs,
hold down the Shift key and click on the first and last pairs to be copied. The two pairs on which you clicked
as well as all pairs in between are highlighted. Once selected, you can copy pairs in one of three ways:
1.
2. Press Ctrl+C
3.
After the copy operation is successfully executed, the Paste menu item under the Edit menu and the Paste icon
on the toolbar are enabled. Copy does not cause test results to be cleared.
Paste
You can paste data that has been cut or copied within Chariot. Data that has been put on the clipboard by
another application results in a disabled Paste menu item under the Edit menu and a disabled Paste icon (two
paper sheets) on the toolbar. However, it is possible to paste data cut or copied by Chariot into other
applications that accept tab-delimited data, such as spreadsheets and editors.
To perform a paste operation, you must have successfully completed a cut or copy operation. A paste operation
can be performed in one of three ways:
1.
2. Press Ctrl+V
3.
Before pasting the clipboard onto the selected Test window, Chariot ensures that the paste operation does not
exceed the licensed number of endpoint pairs. If this number will be exceeded by the current paste operation,
you are told by a dialog and the paste is aborted; otherwise the paste continues.
If the Test window contains test results, you are prompted for confirmation of the paste, since a paste operation
clears all test results. At this point, you can cancel the paste operation or proceed.
Delete
You can remove an existing pair or set of pairs from the Test window. First, select the pair(s) to be deleted,
by clicking on the row or rows. When selected, the rows are highlighted. When deleting multiple pairs, hold
down the Shift key and click on the first and last rows to be removed. The two rows on which you clicked, as
well as every row in between, are highlighted. Then, delete the highlighted pairs in one of the following three
ways:
1.
2. Press Ctrl+D
3.
A warning box appears asking, "Are you sure you want to delete the selected endpoint pair(s)?" Press the Yes button to continue deleting or press the No button to cancel the delete. Upon pressing the Yes button, the highlighted pair(s) is removed from the Test window.
Select all pairs
To select all of the pairs in the Test window, go to the Edit menu and choose Select all pairs, or press Ctrl+A.
All of the pairs in the Test window are highlighted, to indicate that they are selected.
Deselect all pairs
To deselect all of the highlighted pairs in the Test window, go to the Edit menu and select Deselect all pairs.
All pairs which were previously highlighted are no longer selected.
Mark selected items
You can mark any pair or group shown in the Test window. Mark a pair or group when you specifically want
to include it in a graph or in a printed report. New pairs are initially marked when they're created.
Choose this menu item to mark all the pairs and groups that are currently shown as selected (highlighted) in
the Test window. The fact that the pair or group is marked is displayed on the left side of the Test window.
Unmark selected items
In contrast to marking pairs or groups (as discussed above), you can unmark any pair or group shown in the
Test window. Unmark a pair or group when you specifically don't want to include it in a graph or printed
report.
Choose this menu item to unmark all the pairs and groups that are currently shown as selected (highlighted)
in the Test window. The fact that the pair or group is no longer marked is displayed on the left side of the Test
window.
Edit
You can modify an existing pair or multicast group in the Test window. First, select the pair(s) or multicast group to be edited by clicking on the row or rows. When selected, the rows are highlighted. Then, edit the
pair(s) in one of the following three ways:
1.
2. Press Ctrl+E
3.
If you highlighted a single pair, the Edit an Endpoint Pair dialog opens to let you modify the pair. See
Adding or Editing an Endpoint Pair on page 70 for more information on modifying a pair. If you
highlighted a single multicast group or a pair in a multicast group, the Edit a Multicast group dialog opens to
let you modify the multicast group.
If you highlighted multiple pairs, the Edit Multiple Endpoint Pairs dialog opens, enabling you to modify the definition
of many pairs simultaneously. You cannot edit multiple multicast groups at the same time. If you highlight
multiple multicast groups or a combination of multicast pairs and single pairs, the Edit command is not
available.
Note that the Edit feature, in combination with the Replicate feature, is a quicker method of adding new pairs,
compared to the Add pair and the Add Multicast group menu items. To add a new, unique row or group of
rows, select the Replicate command. Then, select the Edit command to open the Edit an Endpoint Pair, Edit
Multiple Endpoint Pairs, or Edit a Multicast Group dialog. Inside these dialogs, you can modify the definition
of a pair or set of pairs to contain the specific test information you want.
If you highlight pair(s) and multicast group(s), the Replicate menu item is not available. You must replicate
pairs and multicast groups separately. Upon performing one of these steps, the Replicate Selected Pairs or
Replicate a Multicast Group dialog opens. See Replicating Pairs in a Test or Replicating a Multicast Group
in a Test for more information.
Add pair
You can add a new pair to the Test window in three ways:
1.
2. Press Ctrl+P
3.
Upon performing one of these steps, the Add an Endpoint Pair dialog appears. See Adding or Editing an
Endpoint Pair for more information on adding a pair.
Add Multicast group
You can add a new multicast group to the Test Window in three ways:
1.
2. Press Ctrl+G
3.
Upon performing one of these steps, the Add a Multicast group dialog box appears. See Adding or Editing a
Multicast Group for more information on adding a multicast group.
Renumber all pairs
After having edited or deleted pairs within a Test window, the numbering in the Group column may no longer be sequential; pairs aren't automatically renumbered. Your list of pairs may start with a number other than 1, may have numbers from the sequence missing, or may be numbered out of sequence. To correct these problems, invoke the Renumber all pairs menu item in one of the following ways:
1.
2. Press Ctrl+N
3.
The pairs in the Test Window are renumbered from 1 in sequential order without missing numbers.
For APPC, enter fully-qualified LU names or LU aliases in these fields. Examples of fully-qualified LU
names are GANYMEDE.JOHNQ or USGOVNSA.I594173.
When using LU aliases, ensure they are defined correctly in the APPC configuration on the endpoint
computers themselves, not at the console. LU aliases are case-sensitive, so be sure to use the correct
capitalization.
For IPX or SPX, enter an IPX address in hexadecimal format or enter its alias. An example of an IPX address in hex format is 03F2E410:0A024F32ED02. The first 8 digits (4 bytes) are the network number;
the 12 digits (6 bytes) following the colon are the node ID (you may also hear these referred to as the
network address and node address).
But, no one wants to enter hex numbers like that more than once. Chariot lets you have your own aliases
for these addresses, which are much easier to type and remember.
For RTP, TCP or UDP, enter an IP address, in domain name or numeric format. An example of domain
name format is www.Ganymede.com, while an example of numeric format is 199.72.46.202.
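The two address formats described above are easy to check programmatically. The sketch below (not part of Chariot; the function name is our own) splits an IPX address into its network and node parts and validates a numeric-format IP address with Python's standard library:

```python
import ipaddress

def split_ipx(addr: str) -> tuple:
    """Split an IPX address of the form NNNNNNNN:HHHHHHHHHHHH into
    its 4-byte network number and 6-byte node ID."""
    network, node = addr.split(":")
    if len(network) != 8 or len(node) != 12:
        raise ValueError("expected 8 hex digits, a colon, then 12 hex digits")
    int(network, 16)  # raises ValueError if not valid hex
    int(node, 16)
    return network, node

# The example address from the text: the first 8 digits are the
# network number, the 12 digits after the colon are the node ID.
net, node = split_ipx("03F2E410:0A024F32ED02")
print(net, node)                          # 03F2E410 0A024F32ED02

# A numeric-format IP address, as used for RTP, TCP, or UDP pairs.
print(ipaddress.ip_address("199.72.46.202"))
```

This is only an illustration of the two formats; Chariot itself performs its own validation when you enter an address.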
As you enter the network addresses of endpoints, Chariot remembers them for you, making it easier to set up a
test the next time. The names are saved in a file named ENDPOINT.DAT, which you can edit with an ASCII
text editor. This lets you add, modify, or delete entries in your list of network addresses.
Be sure that you do not enter an IP Multicast address in the Endpoint 1 network address and Endpoint 2
network address fields. If you enter an IP Multicast address in these fields, the test fails with error CHR0209.
Select one of the available communication stacks from the Network protocol pull-down. When this test is run,
that protocol must be correctly configured and started on the endpoint computers. If you selected a Streaming
script, you must select IPX, RTP, or UDP. If a service quality is required by the network protocol, enter
or select a value defined on the endpoint computers. The service quality values you enter are remembered in
the file SERVQUAL.DAT.
For RTP, TCP or UDP, see Working with Quality of Service (QoS) Templates on page 57 in The Main
Window section for information on setting up QoS templates.
For APPC, see Selecting a Service Quality (APPC Mode Name) on page 24 in the Configuring Chariot
in Your Network chapter for information on APPC mode names.
You can choose to edit the existing script associated with an endpoint pair, or to open a new script. If this is a
new pair, your only choice is to associate a script with this pair, by selecting the Open a script file button.
After you have associated a script with an endpoint pair, you can edit the script by selecting the Edit this
script button. See the Messages and Application Scripts manual for information on scripts.
We recommend always entering a descriptive Pair comment. This lets you identify each pair in the Test
window easily.
APPC is supported at the console), you might use the #INTER mode between Endpoint 1 and Endpoint 2 (for
speediest performance) and use the #BATCH mode between Endpoint 1 and the console (for less-disruptive
delivery of results).
Endpoint computers can have multiple network addresses. For example, it is common for a computer with
multiple adapters to have multiple IP addresses. You can make use of both addresses at Endpoint 1. Specify
one of the addresses for the connection between Endpoint 1 and Endpoint 2. Specify the other address as the
target for the connection from the console, in the field named How does the Console know Endpoint 1?.
If using APPC, enter a mode name in the Service Quality field. However, if using TCP, a QoS cannot be used
on the connection between the console and Endpoint 1.
2.
To group the pairs, choose the Group by menu item under the View menu; a list appears to the right of
the menu item. Choose the criteria by which the pairs should be grouped, and the pairs in the window are then
displayed in groups. To group pairs using the toolbar icons, click the applicable icon:
ALL
No grouping
TCP
SCR
EP1
EP2
SQ
PG
Upon grouping pairs, the groups are listed in ascending order. Use the Group sort order item (described
above) to switch the display of groups between ascending and descending order.
Information
The window is divided into eight tabbed areas. If you are in the Test window and a test is new, only the Test
Setup tab is shown. While a test is running, or after a run, the remaining tabs become available, although only
tabs relevant to the results are shown. For example, if the test does not contain RTP pairs, the Jitter tab is not shown.
Each of these eight areas is identified by a labeled tab on which you may click to open the window. Rather
than clicking on tabs to move between areas, you can choose Information under the View menu.
Note that the Information menu item in the Test window is grayed until after a test has been run.
1. Choose Expand all tests and groups under the View menu
2.
The test setup and results are displayed for all of the pairs.
Collapse all tests and groups
Groups of pairs can be collapsed to decrease the level of detail displayed in the Test window. Groups can be
collapsed in two ways:
1. Go to the View menu and select the Collapse all groups item
2.
The information for the pairs is summarized and displayed by groups. Individual groups may be expanded or
collapsed by double-clicking on the group.
Throughput units
Your throughput results can be shown in different units of measurement. To change the unit of measure by
which your throughput results are shown, choose the Throughput units item from the View menu. In the
Throughput Units dialog box, which opens, select the unit of measurement you wish to use.
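The choice of units is purely presentational; the same measurement converts directly among units. A small sketch (the unit names here are generic and not necessarily Chariot's exact list) shows the arithmetic, using decimal multipliers and 8 bits per byte:

```python
# Convert a throughput measured in bytes per second into other
# common units of measurement.
def convert_throughput(bytes_per_sec: float) -> dict:
    bits = bytes_per_sec * 8          # 8 bits per byte
    return {
        "Bps":  bytes_per_sec,        # bytes per second
        "KBps": bytes_per_sec / 1_000,
        "kbps": bits / 1_000,         # kilobits per second
        "Mbps": bits / 1_000_000,     # megabits per second
    }

# 125,000 bytes/s is 1,000,000 bits/s, i.e. exactly 1 Mbps.
units = convert_throughput(125_000)
print(units["Mbps"])   # 1.0
```

Changing the unit in the Throughput Units dialog rescales the display in this way; it does not change the underlying measurements.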
Show error message
If there's an error associated with the selected endpoint pair, choosing this menu item causes Chariot to show
the Error Message dialog for the message.
Show timing records
Choosing this menu item shows the individual timing records associated with the results for this pair.
Show endpoint configuration
Choosing this menu item shows extensive details about the endpoint programs, and the operating systems and
protocol stacks they are using. This information differs among operating systems and endpoint versions. See
Endpoint Configuration Details for an example of what's shown.
When you have defined your sort, press the OK button to re-sort the Test window according to your
specifications.
To change your mind and avoid sorting, press the Cancel button.
Graph Configuration
A graph is always shown at the bottom of the Test window when results are available. The Graph
Configuration dialog lets you choose what type of graph is shown.
Choose among line-, bar-, or pie-type graphs. Histograms are shown as bar graphs. The type of graph applies
whether you're viewing throughput, transaction rate, response time, lost data, or jitter.
Press the Pairs or Groups button to decide how the results are aggregated in your graph. The graph you've
chosen shows either the pairs you've marked (giving you more detail) or the groups you've marked (giving you
a way to combine many pairs).
You can also choose a legend, which shows the colors and patterns used for each pair or group. Additionally,
you can choose whether to see a grid: thin lines that help you visualize your data better.
Press the Axis Details button for further control over how the graph is shown.
Press the Apply button to see the immediate effect of your choices. If you like what you see, press OK. If you
want to reset your changes and try again, press Undo. Press Cancel to close the dialog box without
keeping the changes you've applied.
Repeat the steps above on the Maximum line to change the maximum end of the Elapsed time range. The
value that you type on the Maximum line determines the end of the Elapsed time range shown on the line
graph. For example, if you enter 3 in the Maximum field, the Elapsed time in the line graph ends at 3 minutes.
Test results after three minutes are not displayed in the line graph.
The vertical axis of the line graph depicts the data for the tab selected on the Test window. In the Axis Details
dialog box, you may specify the results range displayed on the vertical axis. The results range defaults to
minimum and maximum ends that cover the total range of Throughput, Transaction Rate, or
Response Time. You may decrease or increase the range of the vertical axis by changing the minimum and/or
maximum range ends.
Click on the tab for the type of line graph you wish to modify. To change the lowest value on the vertical axis,
click in the Minimum Auto box to remove the checkmark. Upon clearing the box, the Minimum line appears
in black type, indicating that it is enabled. Enter a value in the Minimum field which determines the lowest
value on the vertical axis (such as the lowest #/second for Transaction Rate). For example, if you enter 1600 in
the Transaction Rate's Minimum field, the vertical axis begins at 1,600/second. Transaction Rate results
below 1,600/second are not displayed in the line graph.
Repeat the steps above on the Maximum line to change the maximum end of the results range. The value that
you type on the Maximum line determines the highest value on the vertical axis. For example, if you enter
3000 in the Transaction Rate's Maximum field, the highest point on the vertical axis is 3,000/second.
Transaction Rate results above 3,000/second are not displayed in the line graph.
2. Press Ctrl+R
3.
Once a test is started, the Run icon changes from a running man with a green light to a red stop sign. After the
test has successfully run, the Run icon returns to a running man and the Run Status changes to Completed.
In addition, six new tab areas appear in the Test window. You can click on these tabs to view different aspects
of the test results. See Running a Test to understand what occurs while a test is run.
Stop
You can stop a running test in two ways:
1.
2. Press Ctrl+T
When you stop a running test, a warning box appears asking, "A test is currently running. Do you want to stop
the test?" Press Yes to stop the test, or press No to let the test continue running. See Stopping a Running Test
for more information on stopping.
Set run options
You can choose parameters for how one test is run by selecting the Set run options item from the Run menu.
A two-page notebook is shown and contains a Run options and a Datagram page.
Poll endpoints now
You can cause the console to contact each of the Endpoint 1 computers in a test while a test is running. The
endpoints reply, returning the number of timing records they've created so far in this test. You can poll the
endpoints while a test is running in three ways:
1.
2. Press F5
3.
See Polling the Endpoints for information about why you'd choose to poll during a running test.
Running a Test
Before running a test, the endpoint programs for the pairs in your test must be active, as well as their
underlying network software.
Be sure the underlying network software for the protocols you are using is configured and active at the
console and each endpoint in the test. It is probably best to start the network software at boot time.
Be sure the Endpoint program is active on every endpoint participating in the test. When the endpoint
program is running (and its output is visible), it shows whether it is successfully accessing the underlying
network protocol.
There are two ways to run a test from the console. You can either use the graphical user interface of the
Chariot console program or use the command-line program named RUNTST. Both use the same underlying
software, so the test results from each are the same.
See RUNTST: Running Tests on page 111 in the Using the Command-Line Programs chapter for details on
running a test from the command line.
More typically, you'll run tests using the graphical console program. In a Test window, select the Run item
from the Run menu or press the Run button.
Before running a test, we recommend saving your test setup. Test runs involve complex, extended interaction
with multiple computers and network programs. Unexpected errors can prevent a run from completing, in
which case the test cannot be saved. Saving the test before the run lets you avoid recreating your setup of
endpoint pairs and scripts.
You're running the test in Batch operation, and you'd like to know how many timing records have been
generated. The endpoints return just the number of timing records; they don't send the timing records
they're holding, but haven't yet sent.
You suspect that one or more Endpoint 1 computers can no longer be reached. If Endpoint 1 is powered
off during a test, the console never actually knows, since it doesn't maintain a connection while a test is
running. Polling forces the console to reach each Endpoint 1.
You can adversely affect your test results by polling too frequently.
The more endpoints involved in a test, the less often Chariot refreshes status changes in the Test window. This
is done to reduce the overhead of updating records at the console. For large tests, this refresh period can be as
long as five seconds. The true status of the endpoint (polling or running) may not be reflected for several
seconds, even though the run status has changed while the test is running.
A script is sending a large amount of data inside a transaction. The endpoint does not stop until it reaches
either an END_LOOP or END_TIMER command.
2.
3.
waiting several minutes before starting another test to the same endpoints, if you've abandoned a run. If, on a
subsequent run after abandoning a run, Chariot encounters endpoint errors or an endpoint does not respond,
you may need to restart the endpoint.
When a pair fails, Chariot informs all the other pairs to stop after their initialization step, unless you have
specified in the Run Options that Chariot should not do this. If your test appears to be hung in the Initializing
state after a pair fails, it may be because a network problem is keeping Chariot from detecting the failure. In
this case, you may need to stop the test manually.
You can stop a running test by:
2.
3.
Initialized
An individual endpoint pair reaches this stage when it has completed Initializing, and reported back to
the console. When all endpoint pairs reach this stage, the console issues the calls to start all the scripts
executing.
n/a
The status n/a means that either the test has not started running or the test has completed running but does
not have enough information to return the data.
Running
The scripts are running between the endpoints in each endpoint pair.
If you've set up the test to Run until any script completes, Chariot shows the estimated time remaining
in the status bar.
Polling
The console can poll endpoints on a timed basis; you can also manually poll by pressing the Poll icon on
the toolbar. When you poll the endpoints during a running test, the console sends a message to each of the
Endpoint 1 computers in the test. An endpoint pair is in Polling status when it is returning the number of
timing records generated so far for this test.
Requested stop
The test is over; the console has sent a request to each endpoint pair to stop the script now executing. An
endpoint pair has the Requested stop status while the console waits to hear back from Endpoint 1 in each
pair that it is now stopping.
Stopping
This stage can be reached in three conditions:
An endpoint pair completed, and you had chosen either to end the run when the first endpoint pair
completes or to run for a fixed duration. That has occurred, and the console is stopping all the
remaining endpoint pairs.
You've chosen Stop a running test from the Results window menu, or you've chosen to close the
window. The console is stopping all the running endpoint pairs.
An error occurred on one of the pairs, so the console is stopping the other active pairs.
Running tests are not stopped in the middle of a transaction; endpoints only stop after an END_TIMER
command. Stopping can take a long time if you're running a test with large SEND sizes (say, you're
simulating a file transfer of more than 10 million bytes over a LAN). See Script Command Descriptions
in the Working with the Script Editor section for more information about the END_TIMER and SEND
commands.
Stopping can also take between 20 and 50 seconds when running pairs using SPX on Windows NT, doing
loopback (both endpoints have the same address). If the endpoint is on a RECEIVE call, the protocol
stack can pause for almost a minute before returning.
Finished
The run has completed. If the test ran long enough so that results were generated, they are shown.
Error detected
At least one of the endpoints has detected an error, which it has reported to the console. Depending on
your Run Options settings, the console may now be trying to stop the other running pairs. Subsequent
timing records received by the console will be discarded.
Abandoned
The running test was stopped by you, and then you pressed the Abandon Run button. This endpoint pair was
running, but the console has abandoned it without waiting for the remainder of its timing records. The
two endpoints may still be executing their scripts and attempting to send timing records back to the
console, which is now discarding them. After abandoning endpoints that you think were very busy, it is
best to wait about two minutes before starting another performance test.
Select Comparison window to bring Chariot's Comparison window to the foreground. If you are currently
viewing the Comparison window, this menu item is not displayed.
Each open test window (up to nine) is displayed on the menu. Select the name of the test to bring the Test
window to the foreground.
The Relative Precision column gives you a feel for the consistency among a pair's timing records. The
Confidence Interval and Relative Precision columns display n/a while a test is running; real
numbers are shown when a test is over (and there are at least 2 timing records).
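The manual does not define the exact statistics Chariot computes, but a conventional reading is a confidence interval around the mean of a pair's timing records, with relative precision expressed as the interval's half-width as a percentage of the mean. A sketch under that assumption (using a normal approximation rather than whatever distribution Chariot actually uses):

```python
import statistics

def relative_precision(records: list, z: float = 1.96) -> float:
    """Half-width of an approximate 95% confidence interval for the
    mean, as a percentage of the mean. At least 2 timing records are
    required, matching the manual's n/a behavior."""
    if len(records) < 2:
        raise ValueError("need at least 2 timing records")
    mean = statistics.mean(records)
    sem = statistics.stdev(records) / len(records) ** 0.5  # std. error
    return 100 * (z * sem) / mean

# Hypothetical per-record response times for one pair (seconds).
timings = [9.8, 10.1, 10.0, 9.9, 10.2]
print(round(relative_precision(timings), 2))
```

The tighter the spread among a pair's timing records, the smaller this percentage; a large value signals inconsistent measurements.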
If you choose to display this data using the bar graph with max/avg/min, the display shows (for each pair) the
maximum value at the top of the upper bar segment, the average value at the top of the middle bar segment,
and the minimum value at the top of the lower bar segment.
The Lost Data graph for this tab shows the percentage of lost bytes for the selected pairs/groups over elapsed
time. This graph can show you at what time during the test the data was lost. If there is no lost data for the
selected pairs/groups, this graph is shown empty.
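The percentage plotted is simply lost bytes as a fraction of bytes sent within each interval of elapsed time. The field names and values below are illustrative, not Chariot's:

```python
# Percentage of lost bytes per elapsed-time interval, as plotted on
# the Lost Data graph. Each tuple is (bytes_sent, bytes_received)
# for one interval; the numbers are made up for illustration.
intervals = [(100_000, 100_000), (100_000, 97_500), (100_000, 99_000)]

def lost_pct(sent: int, received: int) -> float:
    return 100 * (sent - received) / sent

print([lost_pct(s, r) for s, r in intervals])   # [0.0, 2.5, 1.0]
```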
If you choose to show this data in a pie graph, the graph shows cumulative totals. A single group or pair shows
the same as multiple groups/pairs.
The Duplicate DGs Sent by E2 column shows the number of datagrams Endpoint 2 had to retransmit because
it didn't receive an acknowledgment from Endpoint 1 before the Retransmission Timeout period expired. The
number of duplicates sent is the number of duplicates received plus the number of datagrams lost. This column
is only applicable for tests using non-streaming scripts.
The Total DGs Received by E2 column shows the number of datagrams received by Endpoint 2. This column
is shown for tests using either a streaming or non-streaming script.
The Duplicate DGs Received by E2 column shows the number of datagrams with the same sequence number
that were received by Endpoint 2. This column is applicable for streaming or non-streaming tests.
The number shown in the Datagrams lost, E2 to E1 column is an approximation: some of these lost
datagrams sent by Endpoint 2 may have been merely delayed in the network and would have been received by
Endpoint 1 given enough time. Once a script completes, however, there's no longer any need for Endpoint 1 to
wait to receive those datagrams.
The Datagrams Out of Order column shows the number of the datagrams that are received out of sequence.
This column is only applicable for tests using streaming scripts.
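The counters described above are related: every retransmitted datagram either arrives at the peer (counted as a duplicate received) or is itself lost, so duplicates sent equal duplicates received plus datagrams lost. A quick check of that relationship, with hypothetical counter values:

```python
def duplicates_sent(duplicates_received: int, datagrams_lost: int) -> int:
    """Duplicates the sender had to transmit: each retransmission was
    either received as a duplicate or lost in the network."""
    return duplicates_received + datagrams_lost

# Hypothetical counters for one non-streaming pair.
print(duplicates_sent(40, 12))   # 52
```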
Here are some hints and tips for interpreting the data shown on this tab:
If the number shown in the Duplicate Datagrams Received by E1 column is large as a percentage of
Total Datagrams Received by Endpoint 1 column, consider setting the Retransmission Timeout higher,
to prevent Endpoint 2 from retransmitting too often. However, if datagrams are being lost, changing this
will only increase the number of duplicate datagrams.
If you are using a non-streaming script and the number shown in the Datagrams lost, E1 to E2 column is
too large, the Window Size parameter in the Datagram Run Options is too large or the network is being
used by too many applications at the same time.
If the test is using a streaming script and there is a large number of duplicates at Endpoint 2, there is
probably a problem in the network configuration, such as a loop. Look at the configuration of network
elements, such as a router, to find the problem.
If you are using a streaming script and there is a high number of lost datagrams in the test, try changing the
data rate to a slower rate. The sender may be sending datagrams faster than the receiver can receive them.
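When slowing the data rate, the observed delivery fraction suggests a starting point: scale the current rate by the fraction of datagrams that actually arrived. This is a rule of thumb of ours, not a Chariot feature:

```python
def suggested_rate(current_rate_kbps: float, sent: int, received: int) -> float:
    """Scale the streaming data rate down by the observed delivery
    fraction; a crude first guess when the sender outruns the receiver."""
    return current_rate_kbps * received / sent

# 10,000 of 12,500 datagrams arrived, so try 80% of the current rate.
print(suggested_rate(1_000.0, 12_500, 10_000))   # 800.0
```

Re-run the test at the lower rate and check whether the loss count drops; network loss from other causes (congestion, misconfiguration) will not respond to rate changes in this way.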
Non-Streaming Scripts
For pairs that did not run a streaming script, the following columns are shown in the Timing Records dialog.
In addition, if a datagram protocol is used for an endpoint pair, datagram statistics are shown on the far right
side.
Streaming Scripts
For pairs that ran a Streaming script, the following columns are shown in the Timing Records dialog.
F2
F5
F9
F10
F11: get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+A
Ctrl+C: copy the test setup for one or more pairs to the clipboard.
Ctrl+D: delete the highlighted endpoint pair. You are asked whether you are sure you want to delete that pair.
Ctrl+E: edit the highlighted endpoint pair(s). This causes Chariot to show the dialog box with all the information about the highlighted endpoint pair (if just one is selected), or with blank fields (if more than one is selected). You can change any of the names, addresses, or other values associated with the pairs.
Ctrl+G
Ctrl+N
Ctrl+P: add a new endpoint pair to a test. If you're working with a test that already has results and you attempt to add a new pair, Chariot asks you whether you want to discard your existing results.
Ctrl+R: run this test. Only one test can be run at a time, to avoid conflicting performance data.
Ctrl+S: save this test setup and its results to a Chariot test file. If the test is untitled, the Save As dialog prompts you to choose a filename.
Ctrl+T
Ctrl+V: paste the test setup for one or more pairs from the clipboard.
Ctrl+X: cut the test setup for one or more pairs to the clipboard.
Alt+F4: close any window or dialog box. When used to close a dialog box, it has the same effect as pressing the Esc key or selecting Cancel with the mouse.
In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.
Tabs:
See the File Types and How They Are Handled section on page 108 for information on how these files are
handled.
Select the Close menu item to exit the Comparison window.
Saving a Comparison
From the Comparison window, you can save the comparison you are currently viewing. Chariot saves all the
titled tests' filenames (with directory locations), the current grouping settings, sorting settings, graphing
settings, current notebook tab, and the current throughput units. Untitled tests are not stored in the
comparison.
If you want to save a comparison for the first time or want to save an existing comparison under a different
name, select the Save Comparison As menu item from the File menu. The Save Comparison As dialog
appears. In the Save Comparison field, enter the name you want to save the comparison under. You can also
select an existing comparison name. The special characters *, \, and ? are not allowed in a comparison name.
To save the comparison, press the OK button.
If you want to save a previously saved comparison under the same name, select the Save menu item from the
File menu. Chariot saves the open comparison under the current name. If you expand a test or mark pairs in
the Comparison Window, the Save menu item is disabled. The Comparison Window does not save expanded
tests or marked pairs.
Opening a Comparison
From the Comparison window you can open a previously saved comparison.
From the File menu, select the Open Comparison menu item. The Open Comparison dialog opens.
In the Select name of the comparison configuration you would like to open field, select the name of the
comparison you want to open from the list.
To open the selected comparison, select the OK button. Chariot closes all open Test windows that are not part
of the comparison and then opens a Test window for each test in the comparison. The Comparison window
containing the selected comparison is displayed.
2. Press Ctrl+C
3.
After you have copied a pair, you can paste the pair into a Test window. When you access a Test window, the
Paste menu item under the Edit menu and the Paste icon on the toolbar are now available. You cannot paste
pairs into the Comparison window. After you have copied a pair, you can also paste text about the pair into a
text window such as Microsoft Notepad.
Select all pairs
To select all of the pairs in the Comparison window, go to the Edit menu and select Select all pairs, or press
Ctrl+A. All of the pairs in the Comparison window are highlighted to indicate that they are selected.
Deselect all pairs
To deselect all of the highlighted pairs in the Comparison window, go to the Edit menu and select Deselect all
pairs. All pairs that were previously highlighted are no longer selected.
F2: get the help table of contents and an index of all the available Chariot help topics.
F3
F9
F10
F11: get the About Chariot dialog, which shows your version and build level, and lets you get product support information.
Ctrl+A
Ctrl+C: copy the test setup for one or more pairs to the clipboard.
Ctrl+O
Ctrl+S: save this comparison. If the comparison is untitled, the Save As dialog prompts you to choose a filename.
Alt+F4: close any window or dialog box. When used to close a dialog box, it has the same effect as pressing the Esc key or selecting Cancel with the mouse.
In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.
To open the Error Log Viewer, select the View Error Log menu item from the Tools menu on the Main
window.
The Error Log Viewer shows the record number of the entry, the date and time, the detector of the error, and a
brief description of the error.
You can select the criteria for the entries that you want to view. See Filtering the Entries Displayed in the
Error Log Viewer for more information.
If you need to view an error log on an endpoint computer, use the FMTLOG program to format the binary error
log. See FMTLOG: Formatting Binary Error Logs on page 116 in the Using the Command-Line Programs
chapter for more information.
Menu items in the Error Log Viewer:
"The File Menu (Error Log Viewer)" on page 97: to open a log, save a log, or exit the Error Log Viewer.
"The View Menu (Error Log Viewer)" on page 97: to select the order of the entries shown in the Error
Log Viewer, select which entries to filter, search for a specific word or phrase, or view detailed
information about an entry.
"The Options Menu (Error Log Viewer)" on page 98: to save settings on exit, wrap the log, or change the
Error Log Viewer font.
You can save the following settings to be used the next time you access the Error Log Viewer by selecting the
Save Settings on Exit menu item:
Filtering Settings
Font Settings
Sort Order
Wrap Settings
The Error Log Viewer saves the settings for these items that you have set when you select the Exit menu item.
The Font menu item lets you change the fonts used in the Error Log Viewer.
F2
F3
F9
F10
F11: get the About Error Log Viewer dialog, which shows your version and build level, and lets you get product support information.
Ctrl+S
Ctrl+F
Ctrl+G: find the next occurrence of the last word you searched for in the Error Log Viewer.
Ctrl+O
Alt+F4: close any window or dialog. When used to close a dialog, it has the same effect as pressing the Esc key or pressing Cancel with the mouse.
In addition to these keys, the Alt key can be used in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.
If you want to use the Script Editor as a standalone product, you can run the Script Editor by selecting the
Script Editor icon in the Application Scanner Folder.
If you want to use the Script Editor from Chariot, you can access the Script Editor two ways.
If you want to edit a script and have the changes to the existing script used by all pairs, select the Edit
Scripts menu item from the Tools menu on the Main window. Also select this menu item if you want to
create a new script and you want the script to be available to all pairs.
If you want to make changes to an existing script and have the option of saving the changes with a specific
pair, highlight the pair in the Test window and select the Edit menu item. From the Edit an Endpoint Pair
dialog, press the Edit this script button. The Script Editor is shown. Note that you can also save the
changes to a file that can be used by other pairs if you access the Script Editor from this dialog.
The main window of the Script Editor shows the commands for the script and a list of the script's variables.
In the top half of the window, Endpoint 1's portion of the script is shown on the left; Endpoint 2's on the right.
These are sequential lists of the commands (and their parameters) to be executed by the endpoints. You can
look at a long script by scrolling through it. You can access a dialog that lets you edit the highlighted
command's parameters in one of three ways:
1.
2. Select the Edit parameter menu item from the Edit menu
3. Highlight a command, click the right mouse button, and then select the Edit menu item
The lower half of the window summarizes the script variables. You can access a dialog that lets you edit the
highlighted variable in one of three ways:
1.
2. Select the Edit variable menu item from the Edit menu
3. Click the right mouse button and then select the Edit menu item
Commands in the File menu let you handle script files and exit the editor. The Edit menu contains commands
that operate on the currently selected script commands or variables. To insert a command in a script, select a
command (or group of commands) in the top half of the window, and then choose the command to be inserted
from the Insert menu. The new command is inserted after (or around) the selected command(s).
The Application Script Name field shows a brief (40-character) description of the script. This script name is
required; it is important for identifying the script in other Ganymede Software products. If you are creating a
new script, be sure to enter descriptive information.
The toolbar provides shortcuts to the most commonly used menu items. You can move variables up and down
in the list. The Swap icon lets you move the selected command to the other endpoint. Use the Insert icons to
insert commands into the script.
2.
3.
select the Edit Parameter menu item from the Edit menu
2.
3.
The Variable help field provides details about this variable and how to use it in the script. You can customize
the help text by entering information in this field.
Press the Reset button to return the value in the Current value field to the value that was in the Default value
field the last time you exited the Edit Variable dialog for this variable.
2. Press Ctrl+O
The Open a Script dialog is shown. Select the script that you want to open. The script is shown in the Script
Editor. You can then modify the script and save the script.
You can exit the Script Editor in two ways:
1.
2. Press F3
If you have modified the current script and not saved your changes, a message box is shown asking if you want
to save your changes to the current script.
Enter a brief (40-character) description of the script in the Application Script Name field. This script name is
required; it is a very important field in future versions of Ganymede Software products, so be sure to enter
descriptive information.
When editing script parameters, you can change their names and the variables included in the parameter. See
Editing a Parameter of a Script Command for more information.
When editing script variables, you can change their names, their current and default values, and their
comments. See Editing a Script Variable for more information on editing variables.
For a full description of the script commands and their parameters, and the rules governing the creation of
valid scripts, see the Messages and Application Scripts manual.
To save the script, select the Save menu item from the Tools menu.
Saving a Script
Saving Scripts from the Standalone Script Editor
If you are using the Script Editor as a standalone product and want to save your changes with the same script
file name, select the Save menu item from the File menu. If you have not previously saved this script, the
Save Script File As dialog is shown.
If you want to save your changes under a new file name, select the Save As menu item from the File menu.
The Save Script File As dialog is shown. Enter or select the filename.
You can save the script to the pair in two ways:
1. go to the File menu and select the Save to pair menu item
2. press Ctrl+S
The Script Editor saves the modifications to the pair level. Note that the file name shown in the title bar
does not change. If you are saving a new script, Untitled is shown in the title bar.
File Level
The Script Editor lets you modify a script and have those modifications available to new pairs created after
you modified the script. If you previously associated the script with a pair, your modifications will not be
reflected in the version of the script associated with the pair. To have the modifications reflected in
existing pairs, you must reattach the script to the pair.
If you access the Script Editor from the Edit a Pair dialog and want to save a script at the file level, select
the Save As menu item from the File menu. Enter or select the filename you want to save the script as and
press the OK button. The Script Editor saves the modifications to the script on the file level. The
filename of the script is shown in the title bar.
If you access the Script Editor from the Tools menu and want to save your changes with the same script
file name, select the Save menu item from the File menu.
If you access the Script Editor from the Tools menu and want to save your changes under a new file name,
select the Save As menu item from the File menu. The Save Script File As dialog is shown. Enter or
select the filename.
Undo
You can reverse your last change to the script. You can undo actions in two ways:
1. go to the Edit menu and select the Undo menu item
2. press Ctrl+Z
Redo
You can reverse your last undo and return the script to the state before you selected the Undo menu item. This
menu item is only available when your last action was an Undo.
You can redo actions in two ways:
1. go to the Edit menu and select the Redo menu item
2. press Ctrl+E
Delete
You can delete commands and variables from a script. First, highlight the command or variable you want to
delete by clicking on the command or variable. Once selected, you can delete the command or variable in two
ways:
1. go to the Edit menu and select the Delete menu item
2. press the Delete key
The command or variable is deleted from the script and is not shown in the Script Editor.
Move Up
You can move variables up in the sequence of commands in a script. To keep the script valid, the Move Up
menu item and Move Up icon are only available when moving the highlighted variable up is a valid move. In
some cases, using the Move Up function may cause the highlighted variable to move up to the next valid place
in the script or may cause other variables to move.
First, select the variable that you want to move up by clicking on the variable. Once selected, you can move the
variable up in three ways:
1. go to the Edit menu and select the Move up menu item
2. press Ctrl+Up
3. click the Move Up icon on the toolbar
Move Down
You can move variables down in the sequence of commands in a script. To keep the script valid, the Move
Down menu item and Move Down icon are only available when moving the highlighted variable down is a
valid move. In some cases, using the Move Down function may cause the highlighted variable to move down
to the next valid place in the script or may cause other variables to move.
First, select the variable that you want to move down by clicking on the variable. Once selected, you can move
the variable down in three ways:
1. go to the Edit menu and select the Move down menu item
2. press Ctrl+Down
3. click the Move Down icon on the toolbar
Swap Sides
You can move a command to the opposite endpoint or switch a command pair. For example, if you highlight
SEND/RECEIVE and use the swap functionality, the command pair is now RECEIVE/SEND. This
functionality is only available for certain commands.
First, select the command that you want to move to the other endpoint by clicking on the command. Once
selected, you can swap sides in three ways:
1. go to the Edit menu and select the Swap sides menu item
2. press Ctrl+W
3. click the Swap icon on the toolbar
Edit Parameter
You can change a command's parameters and assign each parameter as either a constant or a variable. See
Editing a Parameter of a Script Command for more information on editing parameters.
First, select the command that you want to edit by clicking on the command or a parameter for the command.
You can edit parameters in either of these ways:
1. go to the Edit menu and select the Edit parameter menu item
2. click the right mouse button and then select the Edit menu item
First, select the variable that you want to edit by clicking on the variable. You can edit the variable in either of
these ways:
1. go to the Edit menu and select the Edit variable menu item
2. click the right mouse button and then select the Edit menu item
Command               Description
CONNECT
SEND/RECEIVE          Sends a buffer of the size and type you specified from Endpoint 1 and
                      receives data at Endpoint 2.
RECEIVE/SEND          Sends a buffer of the size and type you specified from Endpoint 2 and
                      receives data at Endpoint 1.
FLUSH at Endpoint 1
FLUSH at Endpoint 2
SLEEP at Endpoint 1
SLEEP at Endpoint 2
LOOP
Highlight the location in the script where you want to insert the command or the Group of Commands to insert
around. From the Insert menu, select the command you want to insert in the script.
All scripts must adhere to specific rules. See the Messages and Application Scripts manual for more
information. Only the commands that can be inserted at the selected location in the script are available.
Enter
F1
F2
F3
F9
F10
F11
get the About dialog, which shows your version and build level, and lets you get product
support information.
Ctrl+E
redo the last operation to the script, assuming you've just chosen Undo.
Ctrl+N
set up a new script. The New Script dialog is shown. You can add a new script based on
five templates.
Ctrl+O
Ctrl+S
save a script file, using the filespec shown on the titlebar. If the script is still untitled, the
Save Script File As dialog lets you choose a path and filename for the script.
Ctrl+W
swap the sides for the currently-highlighted script commands. That is, move the Endpoint
1 command to Endpoint 2, and move the Endpoint 2 command to Endpoint 1.
Ctrl+Z
Ctrl+Down Arrow
move the currently-highlighted script variable one row lower in the list of variables. You
cannot move the port_number variable from the bottom of the list.
Ctrl+Up Arrow
move the currently-highlighted script variable one row higher in the list of variables. You
cannot move the port_number variable from the bottom of the list.
Alt+F4
this key combination can be used to close any window or dialog. When used to close a
dialog, it has the same effect as pressing the Esc key or pressing Cancel with the mouse.
In addition to these keys, you can use the Alt key in combination with any underscored letter to invoke a menu
function. The menu function must be visible and not shown in gray. For example, pressing Alt+F shows the
File menu.
File Description
.AUD
An ASCII file found at the endpoints. File ENDPOINT.AUD contains a record for
each time a test is started and stopped. The records are in comma-delimited format,
allowing easy input into spreadsheet programs.
For more information, see the SECURITY_AUDITING and AUDIT_FILENAME
keywords for the ENDPOINT.INI file described in your Network Performance
Endpoint manual.
.CSV
An ASCII comma separated file, generated at the console. This contains the results
of a run, in a format suitable for loading into a spreadsheet program such as Excel or
Lotus 1-2-3.
See Export Options for CSV file on page 65 for information on exporting the CSV
file format from Chariot.
.DAT
An ASCII file, kept in the directory where the Chariot console is started.
DEREGISTER.DAT contains the 3 fields that are saved when Chariot is deregistered.
Another .DAT file contains the service quality (QoS) templates.
SPXDIR.DAT contains a list of IPX addresses and their aliases.
MCG.DAT contains a list of IP Multicast groups.
.ERR
An ASCII error log file generated internally by any of the Chariot programs. If the
file ASSERT.ERR is generated, this could indicate a program defect which may
affect the operation of Chariot. Keep a copy of the file and refer to the Ganymede
Software Customer Care chapter on page 163 for information on how to report the
problem.
.GIF
A binary graphics file. If you export in HTML format and choose to export graphs
of your results, each graph is saved in a separate file. Files in the GIF format are
suitable for loading into many word processors, graphics applications, and Web
browsers.
.HTM
An ASCII file with HTML tags. This is the default file extension for exporting
tests and results for use as Internet Web pages. You can view a formatted page
with any Web browser that supports tables. Embedded graphs are saved as separate
GIF files.
.INI
.LCL
A binary file that determines the time and date format, the comma separation
format, and the type of money symbol to use based on the language of the version of
Chariot. This file must be in the directory where Chariot is installed for Chariot to
run.
.LOG
A binary log file generated by the console, RUNTST, CLONETST, or any endpoint.
See the Working with the Error Log Viewer section on page 96 for more
information.
.SCR
A binary script file, containing the network calls and their script variables. This file
is protected from damage with a CRC checksum so it should not be modified, even
with a binary editor. Chariot 3.1 can write script files in version 3.1 format or the
older version 2.2 format.
.TST
A binary test file, containing endpoint pair definitions and their associated scripts,
and, optionally, the results of one run. This file is protected from damage with a
CRC checksum, so it should not be modified, even with a hex editor.
Chariot 2.2 can write test files in version 2.2 or 2.1 formats. Chariot 2.2 test files
cannot be read by Chariot 2.1 programs.
.TXT
An ASCII listing file. This is the default file extension when exporting tests and
results to a text file.
.WK3
A binary spreadsheet file, generated at the console. This contains the results of a
run, in a format suitable for loading into a spreadsheet program such as Excel or
Lotus 1-2-3. This file format will not be supported in the next release of Chariot.
You can also use the CSV file format to export tests to a spreadsheet program.
Only one program at a time can write to a Chariot test file, to ensure the integrity of the data in the file.
Chariot protects your files and ensures that while a test file is open, other programs cannot write to the file.
When script files are opened, they are read directly into the test being constructed or modified. A test file
contains a separate script for each endpoint pair, allowing full flexibility in the choice of script variables.
Scripts are stored in a compact, binary format, so script files and test files (without results) are rarely large.
Test and script files are protected from damage by a checksum. This makes modifying them with a binary or
hexadecimal editor impractical.
Each of these commands writes information to the screen, using stdout. You can redirect this information to a
file, using the > or >> operators. If you choose to redirect the output to a file, you can print the file or
manipulate it with an ASCII text editor.
See the Chariot Programming Reference for information on the Chariot API, which provides you with the
ability to automate testing.
RUNTSTRunning Tests
The program named RUNTST lets you run test files created by the Chariot console program.
Here's the syntax of the RUNTST command:
RUNTST test_filename [new_test_filename] [-tN]
Enter RUNTST at a command prompt on the computer you're using as the console.
Its second parameter is optional; you can supply a separate filespec as a target for the test setup and results.
If the second parameter, new_test_filename, is omitted, the results are written directly to the original test
file.
The -t (timeout) parameter is optional. If specified, this parameter causes RUNTST to stop running the
test after N seconds.
For example, here's how to run the Chariot test contained in a file named FILEXFER.TST, and write the
results back into that file:
RUNTST TESTS\FILEXFER.TST
The RUNTST program runs until the timing records are returned by all Endpoint 1 computers participating in
the test.
Use the FMTTST command to read the binary results data in a test file and produce a formatted listing.
While it is running, RUNTST shows its progress by writing to stdout, so you can see what is going on. You
can stop a running test by pressing Ctrl+C or Ctrl+Break; RUNTST will ask you if you really want to exit. If
you answer with a Y, RUNTST directs the endpoints to stop the test. If the stopping seems to take excessively
long, press Ctrl+C or Ctrl+Break again to exit the program altogether. (However, calling RUNTST from
normal batch files on Windows NT works as you would expect.)
RUNTST does not poll endpoints, even if polling is defined in the test file.
If RUNTST reads a test file from an older version, it writes out its test setup and results in its current version.
For example, if you have a test that was created at version 1.x and you run the newest version of RUNTST, the
file that's written will be in the newest version, which cannot be read by older versions of RUNTST and
FMTTST. If you'd like to continue using older versions of RUNTST, be sure to make an extra copy of the test
file (or write to a new_test_filename), so you still have a copy of your original.
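If you script your test runs, a small wrapper can build the RUNTST command line described above. This is a sketch; it assumes RUNTST is on the PATH, and the file names in the example are placeholders.

```python
import subprocess

def build_runtst_command(test_file, new_test_file=None, timeout_secs=None):
    # Build the command line: RUNTST test_filename [new_test_filename] [-tN].
    # Kept separate from the call so it is easy to test.
    cmd = ["runtst", test_file]
    if new_test_file is not None:
        cmd.append(new_test_file)        # results go here; the original stays untouched
    if timeout_secs is not None:
        cmd.append(f"-t{timeout_secs}")  # stop the test after N seconds
    return cmd

def run_test(test_file, new_test_file=None, timeout_secs=None):
    # Run RUNTST (assumes it is on the PATH) and return its exit code.
    return subprocess.call(build_runtst_command(test_file, new_test_file, timeout_secs))

# Example (file names are placeholders): run FILEXFER.TST, write results to a
# separate file, and stop the test after 120 seconds.
# rc = run_test("TESTS\\FILEXFER.TST", "TESTS\\FILEXFER2.TST", timeout_secs=120)
```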
The RUNTST and Chariot console programs can generally be loaded at the same time, but only one of them
can be running a test at a time. However, if you've changed your Reporting Ports on the Firewall Option tab
of the Change User Settings notebook to a value other than AUTO at the console, RUNTST can't run while
the console is loaded; expect to see message CHR0264 at RUNTST.
The RUNTST program is installed at the console.
The tst_filename parameter is the name of the Chariot test file to be formatted. The output_filename is the file
to which all test output is written. If no output_filename is supplied, output is directed to stdout. The template
name parameter represents a template containing print/export options created at the Chariot console. See
Working with Output Templates in the Operating the Console chapter for more information on working with
output templates.
-h
Creates HTML output. This flag controls the format of the output. If you use this flag, you cannot
use the -s flag.
-v
Creates comma-separated output (with file extension .CSV). You can select which aspects of the
test to export by specifying the CSV-specific options described below. If you use this flag without
specifying CSV-specific flags, the entire contents of the test are used to create the output. If you use
this flag, you cannot use the -c or -t flag to specify the print/export options for the results.
-s
Creates spreadsheet output (with file extension .WK3). This flag controls the format of the output.
When you use this flag, the entire contents of the test are used to create the output. If you use this
flag, you cannot use the -c or -t flag to specify the print/export options for the results.
-c
Generates the output according to the export configuration last used in the Chariot console. The -c
switch exports the test to text format, or, in combination with the -h switch, to HTML, using the
custom configuration settings that were last selected at the Chariot console. This is useful in limiting
the output to the exact data you are interested in. This flag controls what print/export options to use
for the results. If you use this flag, you cannot use the -s flag.
-t
Creates output based on the print/export options saved in an output template. Enter the name of the
output template after this parameter. This flag controls what print/export options to use for the
results. If you use this flag, you cannot use the -c flag.
-q
To generate spreadsheet output in the CSV file format, supply the -v flag
To generate spreadsheet output in the WK3 file format, supply the -s flag
To generate output based on the print/export options saved in an output template, supply the -t flag and the
name of the output template you want to use
For example, to see the formatted results for the Chariot file TESTS\FILEXFER.TST on the screen, a screen at
a time, enter:
FMTTST TESTS\FILEXFER.TST | more
You can also use FMTTST to create Web pages containing test results. The output files contain the HTML
tags needed for any Web browser that supports tables. Use the -h flag to generate HTML output. For
example:
FMTTST -h TESTS\FILEXFER.TST >C:\WEB\XFER1.HTM
Graphs are exported and linked to the HTML Web page with the GIF file format. GIF files are written to the
current directory. The following filenames are used for the GIFs.
Graph Type          Filename
------------------  -------------------------
throughput graph    <testname>_throughput.gif
transaction graph   <testname>_trans_rate.gif
response graph      <testname>_resp_time.gif
You can use FMTTST to create spreadsheet output in either the CSV file format or the WK3 file format.
These file formats can be read by modern spreadsheet programs, such as Excel or Lotus 1-2-3. Export to the
WK3 file format will not be supported in the next release of Chariot.
When you export to the CSV file format, you can use the CSV flags to specify which aspects of the test you
want to export. If you do not set one of these flags, all aspects of the test are exported. These flags can
only be set when using the -v flag.
Here are the FMTTST CSV options:
CSV option  Description
-r
-s          Provides information contained in the Test Setup tab of the Test Window
-d
See Export Options for CSV file on page 71 in the Working with the Console chapter for more information
on the CSV file format.
To export the test results to the WK3 file format, enter:
FMTTST -s TESTS\FILEXFER.TST
CLONETST reads the template pairs from your test file, and creates a third file. This third file is a Chariot
test file, created by replacing the network addresses in the first file with those it read from the second file.
For example:
CLONETST input.tst clone.lst output.tst
You specify three items in each line of the second file (named CLONE.LST in this example):
The pair number of the template pair in the original test file
The Endpoint 1 and 2 network addresses, used to replicate the original pair
These three items must be specified together on a line within the input file, separated by spaces. Blank lines in
this file are ignored.
this file are ignored.
The pair number is the sequential pair number in the list. If you have added or deleted pairs in the test, the
pair number may not match the sequential pair number.
You cannot use CLONETST to build tests with multicast groups. Use the console to build tests with multicast
groups.
Here's an example line in a Clone List file:
1  NewFromName  NewToName
Here's a more advanced example, showing how you might create a test with three endpoint pairs, but use
CLONETST to produce a test file with four endpoint pairs.
If the original test file contained three pairs, such as the following:
Pair  Endpoint 1    Endpoint 2   Protocol  Script
----  ------------  -----------  --------  ------------
1     GANYMEDE.100  GANYMEDE.90  APPC      CREDITL.SCR   etc...
2     44.44.44.22   44.44.44.65  TCP       FILERCVL.SCR  etc...
3     33.44.55.66   33.44.55.77  UDP       FTPGET.SCR    etc...
and the Clone List file contained these lines:
1  MYNET.A      MYNET.B
1  MYNET.C      MYNET.D
2  11.11.11.11  22.22.22.22
1  MYNET.E      MYNET.F
then the OUTPUT.TST file would contain these four pairs:
Pair  Endpoint 1   Endpoint 2   Protocol  Script
----  -----------  -----------  --------  ------------
1     MYNET.A      MYNET.B      APPC      CREDITL.SCR   etc...
2     MYNET.C      MYNET.D      APPC      CREDITL.SCR   etc...
3     11.11.11.11  22.22.22.22  TCP       FILERCVL.SCR  etc...
4     MYNET.E      MYNET.F      APPC      CREDITL.SCR   etc...
You can see that we've used CLONETST to prune the UDP pair from the OUTPUT.TST file above. A clever
way to use CLONETST is to build an original file containing each of the different combinations of protocols
and scripts you plan to use. The CLONE.LST file lets you then create tests on the fly, assigning addresses to
pairs. You can omit the pairs you don't need for a given test.
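A Clone List file is plain text, so you can also generate one programmatically. The following sketch writes the three space-separated items per line that CLONETST expects; the entries shown use the example addresses from this section, and the pair-number mapping is our reading of that example.

```python
def write_clone_list(path, entries):
    # Write a Clone List file. Each entry is a
    # (template_pair_number, endpoint1_addr, endpoint2_addr) tuple;
    # CLONETST expects the three items space-separated on one line.
    with open(path, "w") as f:
        for pair, e1, e2 in entries:
            f.write(f"{pair} {e1} {e2}\n")

# The four-pair example from this section:
write_clone_list("CLONE.LST", [
    (1, "MYNET.A", "MYNET.B"),
    (1, "MYNET.C", "MYNET.D"),
    (2, "11.11.11.11", "22.22.22.22"),
    (1, "MYNET.E", "MYNET.F"),
])
```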
The CLONETST program is installed at the console.
The FMTLOG program is installed at the console and at the endpoints. See the Performance Endpoint manual
for additional information on running FMTLOG on the platform you are using.
The Summary, Run Options, and Test Setup section on page 117, providing an overview of how the test
was set up and when it was run.
The Test Totals section on page 118, showing totals, average, minimum, and maximum for throughput,
transaction rate, response time, and streaming results. The datagram, endpoint configuration, and raw
data totals are also shown.
A set of Confidence Intervals on page 128, showing detailed information about each endpoint pair and
all the timing records for that pair.
This subsection shows information about when (if at all) the test was run, and how long that run took.
Run start time and Run end time mark the dates and times when the test was started at the console and when
the test was ended; this doesn't count the time spent formatting results. The Elapsed time is the difference, in
seconds, between the start and end times.
In the throughput, transaction rate, and response time results for each pair, Chariot shows the Measured time.
The Measured Time is the sum of the times in all the timing records returned for that endpoint pair. Thus, the
Measured Time for any endpoint pair is always less than the total elapsed time for a test.
RUN OPTIONS
End type
Duration
Reporting type
Automatically poll endpoints
Polling interval (minutes)
Stop run upon initialization failure
Connect timeout during test (minutes)
Collect endpoint CPU utilization
Validate data upon receipt
Use a new seed for random variables on every run
Datagram window size (bytes)
Datagram retransmission timeout (milliseconds)
Datagram number of retransmits before aborting
Receive Timeout (milliseconds)
Time to Live (Hops)
This subsection shows the Run Options used for this test. If datagram protocols were used for any of the pairs,
the datagram options are also shown. See Changing the Default Run Options and Changing Your
Datagram Parameters in the Operating the Console chapter for detailed information on each of these values.
TEST SETUP (ENDPOINT 1 TO ENDPOINT 2)
Group/                                              Network   Service
Pair  Endpoint 1             Endpoint 2             Protocol  Quality  Script name
----  ---------------------  ---------------------  --------  -------  ------------
SPX
1     00000002:00a024cc3d29  00000002:00a024cc3f55  SPX       n/a      filesndl.scr
2     00000002:00a024cc3d29  00000002:00a024cc3f55  SPX       n/a      filesndl.scr
TCP
3     test13                 44.44.44.119           TCP       n/a      filesndl.scr
4     test13                 44.44.44.119           TCP       n/a      filesndl.scr
This section shows the test setup for each pair. The test setup includes the group, the endpoint pair number,
the Endpoint 1 and 2 network addresses, the network protocol, the service quality (if any), and the script used
by each endpoint pair. See Adding or Editing an Endpoint Pair in the Operating the Console chapter for
details on each of these fields.
TEST SETUP (CONSOLE TO ENDPOINT 1)
Group/  Console Knows          Console   Console Service  Pair
Pair    Endpoint 1             Protocol  Quality          Comment
------  ---------------------  --------  ---------------  -------
SPX
Pair 1  00000002:00a024cc3d29  SPX       n/a
Pair 2  00000002:00a024cc3d29  SPX       n/a
TCP
Pair 3  44.44.44.56            TCP       n/a
Pair 4  44.44.44.56            TCP       n/a
This section shows how the console connected to Endpoint 1 for each pair. This portion of the test setup
includes the group, the endpoint pair number, the network address by which the console knows Endpoint 1, the
network protocol, the service quality (if any), and the descriptive comment for this pair (if one was entered).
See Adding or Editing an Endpoint Pair in the Operating the Console chapter for details on each of these
fields.
If you are exporting pairs that are part of a multicast group, it may appear that the totals do not include an
aggregate of all the pairs. That is because for multicast, some of the data should only be counted once. For
example, if 5 pairs of a multicast group each have a throughput of 5 Mbits/sec, the actual effect on the network
is not 25 Mbits/sec; it is only 5 Mbits/sec. Although the data is sent only once, 5 different endpoints received
the data and reported the throughput.
Multicast pairs may be counted more than once in group totals if they appear in different groups based on the
grouping you have chosen, for example, Group by Endpoint 2. In these cases the multicast information may be
counted once for each of the group totals but will only be counted one time for the test total.
In the following sections, you are informed when multicast pairs are not counted individually in the totals.
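The counted-once rule for multicast can be illustrated with a short calculation. This is a sketch of the principle described above, not Chariot's actual totaling code.

```python
def total_throughput(pairs):
    # pairs: list of (multicast_group_or_None, throughput_mbps) tuples.
    # Unicast pairs are summed; each multicast group is counted only once,
    # because its data is placed on the network a single time.
    total = 0.0
    counted_groups = set()
    for group, mbps in pairs:
        if group is None:
            total += mbps            # unicast: every pair adds its share
        elif group not in counted_groups:
            counted_groups.add(group)
            total += mbps            # multicast: first pair in the group only
    return total

# Five receivers in one multicast group at 5 Mbits/sec each:
# the network carries 5 Mbits/sec, not 25.
print(total_throughput([("mcast1", 5.0)] * 5))  # 5.0
```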
THROUGHPUT
Group/   Throughput   Throughput   Throughput 95%  Measured  Relative
Pair     Minimum      Maximum      Confidence      Time      Precision
         (Mbits/sec)  (Mbits/sec)  Interval
-------  -----------  -----------  --------------  --------  ---------
SPX      134.886      4,245.966
Pair 1   146.413      3,906.289    730.602         2.928     43.810
Pair 2   134.886      4,245.966    774.590         3.122     44.219
TCP      887.793      7,512.094
Pair 3   887.793      5,425.401    292.901         3.111     9.521
Pair 4   948.128      7,512.094    307.097         3.117     9.802
Totals:  134.886      7,512.094
TRANSACTION RATE
Group/   Transaction  Transaction  Transaction  Transaction Rate  Measured  Relative
Pair     Rate         Rate         Rate         95% Confidence    Time      Precision
         Average      Minimum      Maximum      Interval
-------  -----------  -----------  -----------  ----------------  --------  ---------
SPX      35.014       1.381        43.478
Pair 1   17.077       1.499        40.000       7.481             2.928     43.810
Pair 2   17.937       1.381       43.478        7.932             3.122     44.219
TCP      63.583       9.091        76.923
Pair 3   31.501       9.091        55.556       2.999             3.111     9.521
Pair 4   32.082       9.709        76.923       3.145             3.117     9.802
Totals:  98.597       1.381        76.923
RESPONSE TIME
Group/   Response  Response  Response  Response Time   Measured  Relative
Pair     Time      Time      Time      95% Confidence  Time      Precision
         Average   Minimum   Maximum   Interval
-------  --------  --------  --------  --------------  --------  ---------
SPX      0.05716   0.02300   0.72400
Pair 1   0.05856   0.02500   0.66700   0.026           2.928     43.810
Pair 2   0.05575   0.02300   0.72400   0.025           3.122     44.219
TCP      0.03146   0.01300   0.11000
Pair 3   0.03174   0.01800   0.11000   0.003           3.111     9.521
Pair 4   0.03117   0.01300   0.10300   0.003           3.117     9.802
Totals:  0.04431   0.01300   0.72400
Here is a description of how Chariot calculates throughput, transaction rate, and response time.
Throughput
The throughput is calculated with the following equation:
(Bytes_Sent + Bytes_Received_By_Endpoint_1) / (Throughput_Units) /
Measured_Time
Throughput_Units - the current throughput units value, in bytes per second. For example, if the
throughput units is KBps, the Throughput_Units number is 1024. In this example, the throughput units is
shown in the column heading as Mbits/sec, which is 125,000 bytes per second (that is, 1,000,000 bits
divided by 8 bits per byte). See Changing Your Throughput Units in the Operating the Console chapter.
Measured Time - the sum, in seconds, of all the timing record durations returned for the endpoint pair.
Transaction Rate
The calculations are shown in transactions per second. This is calculated as:
Transaction_Count / Measured_Time
Response Time
The response time is the inverse of the transaction rate. The calculations are shown in seconds per transaction.
This is calculated as:
Measured_Time / Transaction_Count
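The three calculations above can be expressed directly in code. The sketch below mirrors the formulas as given; the Mbits/sec constant matches the 125,000 bytes-per-second figure in the example.

```python
def throughput(bytes_sent, bytes_received_by_e1, throughput_units, measured_time):
    # (Bytes_Sent + Bytes_Received_By_Endpoint_1) / Throughput_Units / Measured_Time
    return (bytes_sent + bytes_received_by_e1) / throughput_units / measured_time

def transaction_rate(transaction_count, measured_time):
    # Transactions per second: Transaction_Count / Measured_Time
    return transaction_count / measured_time

def response_time(transaction_count, measured_time):
    # Seconds per transaction, the inverse of the transaction rate.
    return measured_time / transaction_count

# Mbits/sec expressed in bytes per second: 1,000,000 bits / 8 bits per byte.
MBITS_PER_SEC = 125_000

# 1,000,000 bytes sent, nothing received back at Endpoint 1, 2.0 seconds
# of measured time.
print(throughput(1_000_000, 0, MBITS_PER_SEC, 2.0))  # 4.0
```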
Maximum
The maximum value for an individual timing record.
95% Confidence Interval
Chariot has calculated that with 95% certainty, the real average is within the interval centered around the
Average value. For example,
THROUGHPUT
Group/  Throughput   Throughput   Throughput   Throughput 95%  Measured  Relative
Pair    Average      Minimum      Maximum      Confidence      Time      Precision
        (Mbits/sec)  (Mbits/sec)  (Mbits/sec)  Interval
------  -----------  -----------  -----------  --------------  --------  ---------
1       3.030        0.608        7.929        0.116           59.737    3.839
You can have 95% confidence that the real average throughput (if you were to run this transaction forever)
would be in the range 3.030 plus or minus 0.116 Mbits/sec. See Confidence Intervals on page 128 for
information about the calculation of confidence intervals.
It takes at least two timing records to calculate a Confidence Interval. If there is zero or only one timing record
for a pair, the Confidence Interval cannot be calculated, and n/a is shown instead.
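The formula Chariot itself uses is described on page 128. As a stand-in, the conventional Student-t half-interval below illustrates the idea, including the rule that at least two timing records are needed.

```python
from math import sqrt
from statistics import mean, stdev

# Student-t multipliers for a two-sided 95% interval, keyed by sample size n
# (degrees of freedom n-1). Shown for small n only; this is a conventional
# textbook construction, not necessarily Chariot's exact method.
T_95 = {2: 12.706, 3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571}

def ci_95(samples):
    # Half-width of a 95% confidence interval around the sample mean.
    # With fewer than two timing records it cannot be calculated,
    # which the report shows as "n/a".
    n = len(samples)
    if n < 2:
        return None
    return T_95[n] * stdev(samples) / sqrt(n)

records = [3.1, 2.9, 3.0, 3.2]   # per-timing-record throughput, Mbits/sec
print(round(mean(records), 3), round(ci_95(records), 3))
```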
Measured Time
The total of the measured times for all timing records produced by this pair. The value is shown in seconds.
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. See Relative Precision on
page 129 for more information on the reliability of test results.
The Totals row summarizes the results for each column:
Average
The sum of all the pair averages (except for Response Time, which contains the average of the pair averages).
See Understanding Timing on page 129 for a discussion of how your aggregate throughput value can exceed
the capacity of the network on which you're running.
Minimum
The minimum of all the pair minimums.
Maximum
The maximum of all the pair maximums.
Jitter Data
When running a streaming script with the RTP protocol, the endpoints calculate the amount of jitter for each
timing record. Jitter is the statistical variance of the packet interarrival time. The jitter is measured for each
packet that is sent in a timing record. If only one packet is sent in a timing record, the jitter is zero. The jitter
is reset to zero at the beginning of each timing record. For more information on jitter, see Understanding
Jitter Measurements in the Working with Datagrams and Multimedia Support chapter on page 41.
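Since jitter is described as the statistical variance of the packet interarrival time, a simplified version of the calculation looks like this. It is a sketch of the definition above; RTP implementations often use a smoothed estimator instead.

```python
from statistics import pvariance

def jitter(arrival_times):
    # Variance of the packet interarrival times within one timing record.
    # With fewer than two packets there are no interarrival gaps, so the
    # jitter is zero, matching the rule stated in the text.
    if len(arrival_times) < 2:
        return 0.0
    gaps = [b - a for a, b in zip(arrival_times, arrival_times[1:])]
    return pvariance(gaps)

# Arrival times in milliseconds: perfectly paced packets give zero jitter,
# irregular spacing gives a positive variance.
print(jitter([0, 20, 40, 60]))
print(jitter([0, 10, 40, 60]))
```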
In this example, pair 1 is part of an IP multicast group; pairs 3 and 4 are single unicast pairs running a
streaming script.
Group/Pair
Shows the group name and each pair within the group.
Average
Average jitter statistic of all timing records in the test.
Minimum
The lowest jitter statistic for an individual timing record in the test.
Maximum
The highest jitter statistic for an individual timing record in the test.
Lost Data
When running a streaming script, the endpoints keep track of lost data. Lost data is data that was discarded
and not received by the receiving endpoint. When Endpoint 1 has completed sending the data, it tells Endpoint
2 how much data was sent so the lost data totals can be calculated. It may be the case that the sender is so
much faster than the receiver that most or even all of the data is lost.
Here is an example of streaming data:
STREAMING DATA
Group/    Bytes      Bytes      Bytes    %       E1            Measured  Relative
Pair      Sent by    Received   Lost     Lost    Throughput    Time      Precision
          E1         E2         E2       E1->E2  (KBytes/sec)  (secs)
--------  ---------  ---------  -------  ------  ------------  --------  ---------
My Group  1,404,000  3,579,849  632,151  15.008
Pair 1    1,404,000  1,131,273  272,727  19.425  1,611.156     0.851     21.689
Pair 2    1,404,000  1,395,225  8,775    0.625   1,609.265     0.852     7.004
Pair 3    1,404,000  1,053,351  350,649  24.975  1,387.747     0.988     2.802
No Group  1,000,000  865,272    134,728  13.473
Pair 4    500,000    431,584    68,416   13.683  541.933       0.901     22.957
Pair 5    500,000    433,688    66,312   13.262  554.865       0.880     20.593
Totals:   2,404,000  4,445,121  766,879  14.714
In this example, pairs 1-3 are part of an IP multicast group; pairs 4 and 5 are single unicast pairs running a
streaming script.
Group/Pair
Shows the group name and each pair within the group.
Bytes Sent by E1
The count of the bytes of data sent by Endpoint 1 in this pair. For the multicast group it is typical to see
the same value for all the pairs, since Endpoint 1 sends the data once for all of the Endpoint 2s.
The value may be different if Endpoint 2 fails during the test or loses lots of data. Note that for the
multicast group, the total is not the aggregate total (since the data was sent only once).
Bytes Received E2
The count of the bytes of data received by Endpoint 2 in this pair. This is how much of the data was
successfully received by Endpoint 2.
Bytes Lost E2
The count of the bytes of data lost by Endpoint 2 in this pair. This value is Bytes sent by E1 - Bytes
received E2.
% Lost E1->E2
The percentage of the bytes of data lost by Endpoint 2 in this pair. This value is (Bytes Lost E2 / Bytes
Sent by E1) x 100. Note that the aggregate value for this percentage uses an aggregate value of Bytes Sent by E1.
E1 Throughput
This is the throughput as viewed from Endpoint 1. This value may be greater than the value on the
throughput tab since it does not account for lost data. This value is calculated as follows:
(Bytes Sent by E1 / Throughput units) / Measured time
Measured Time
A total of the amount of time recorded by all of the timing records for this endpoint pair. This may differ
greatly from the amount of time the script was actually executing (that is, the elapsed time, which is shown
at the top of the results), depending on how much activity or SLEEPs the script performed outside of its
START_TIMER and END_TIMER commands. This value is shown in seconds.
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. See Relative Precision
on page 129 for more information on the calculation of relative precision, and Understanding Timing on
page 129 for more information on the reliability of test results.
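The per-pair arithmetic described above can be sketched in a few lines. This is an illustration only, not Chariot's code; it assumes 1 KByte = 1024 bytes, which reproduces the figures in the sample report:

```python
def streaming_stats(bytes_sent_e1, bytes_received_e2, measured_time_secs):
    """Sketch of the streaming-report calculations described above.
    Assumes 1 KByte = 1024 bytes (matches the sample report)."""
    bytes_lost = bytes_sent_e1 - bytes_received_e2           # Bytes Lost E2
    pct_lost = bytes_lost / bytes_sent_e1 * 100.0            # % Lost E1->E2
    e1_throughput = (bytes_sent_e1 / 1024.0) / measured_time_secs  # KBytes/sec
    return bytes_lost, pct_lost, e1_throughput
```

For Pair 2 in the sample report (1,404,000 bytes sent, 1,395,225 received, 0.852 seconds of measured time), this yields 8,775 bytes lost, 0.625% lost, and about 1,609.265 KBytes/sec, matching the table.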
Endpoint Configuration
Following these summaries of the numerical results is a description of how the endpoints were configured
when the test was run. At the Chariot console, this Endpoint Configuration is shown in a separately-tabbed
area of the Test window. For example,
ENDPOINT CONFIGURATION

Group/  E1          E1       E1     E1       E2          E2       E2     E2
Pair    Operating   Chariot  Build  Product  Operating   Chariot  Build  Product
        System      Version  Level  Type     System      Version  Level  Type
------  ----------  -------  -----  -------  ----------  -------  -----  -------
SPX
Pair 1  Windows NT  2.2      xxx    Retail   Windows NT  2.2      xxx    Retail
Pair 2  Windows NT  2.2      xxx    Retail   Windows NT  2.2      xxx    Retail
TCP
Pair 3  Windows NT  2.2      xxx    Retail   Windows NT  2.2      xxx    Retail
Pair 4  Windows NT  2.2      xxx    Retail   Windows NT  2.2      xxx    Retail
The first column is again the group name or pair number. The next four columns describe Endpoint 1 for each
pair; the last four columns describe Endpoint 2.
Operating System
The name of the operating system on which the endpoint is running.
Chariot Version
The Chariot version number for the endpoint. Newer versions have additional capabilities not available on
older versions.
Build Level
An exact identification of the internal build number used by Ganymede Software. This is helpful when
contacting us for service and support.
Product Type
Chariot is available in several forms, which interact in different ways.
Group/   Timing   Trans.   Bytes       Bytes     E1 CPU   E2 CPU   Measured  Relative
Pair     Records  Count    Sent by     Received  Utiliz.  Utiliz.  Time      Precision
                           E1          by E1                       (secs)
-------  -------  -------  ----------  --------  -------  -------  --------  ---------
SPX          219      219  21,900,000       219
Pair 1       111      111  11,100,000       111       39       55     9.845      2.172
Pair 2       108      108  10,800,000       108       55       39     9.841      3.744
TCP          224      224  22,400,000       224
Pair 3       112      112  11,200,000       112       55       39     9.830      4.232
Pair 4       112      112  11,200,000       112       39       55     9.882      2.722
Totals:      443      443  44,300,000       443
Relative Precision
A statistical value indicating the consistency among the timing records for a pair. This is the same
number shown in the summaries for the throughput, transaction rate, and response time. See Relative
Precision on page 129 for more information on the calculation of relative precision.
Endpoint 1:                00000002:006008Bf5ff8
Endpoint 2:                00000002:006008168423
Network Protocol:          SPX
Service Quality:           n/a
Script Name:               filesndl.scr
Console Knows Endpoint 1:  00000002:006008bf5ff8
Console Protocol:          SPX
Console Service Quality:   n/a
The details for each configured endpoint pair are shown in their own section. The first part of each Endpoint
Pair section shows a number of configuration parameters and information about what happened during a run.
These are:
Endpoint 1
The Endpoint 1 network address used by this endpoint pair.
Endpoint 2
The Endpoint 2 network address used by this endpoint pair.
Network Protocol
The protocol used between the two endpoints.
Service Quality
The service quality used between the two endpoints; n/a if not used by this protocol.
Script Name
The filename of the script used between this pair of endpoints.
Pair Comment
The endpoint pair comment (an optional field).
Console Knows Endpoint 1
The configured name used by the console to contact the first endpoint. Depending upon the network
configuration and computer setup, this may be different from the Endpoint 1 field.
Console Protocol
The network protocol used between the console and the first endpoint.
Console Service Quality
The service quality used between the console and the first endpoint; n/a if not used by this protocol.
Endpoint 2
---------------------------------
CONNECT_ACCEPT
port_number=AUTO
LOOP
number_of_timing_records=100
LOOP
transactions_per_record=1
RECEIVE
file_size=100000
receive_buffer_size=DEFAULT
CONFIRM_ACKNOWLEDGE
END_LOOP
END_LOOP
DISCONNECT
Send and receive buffers can be set to the value DEFAULT. This tells an endpoint to use buffers that are the
default size for the network protocol being used. DEFAULT lets you use the default buffer size for each
protocol, without having to modify the script to handle protocol differences. The default value is different
depending on the protocol and platform being used. Chariot uses the most common value for each particular
environment.
Endpoint 1 Value
[Detail values for Endpoint 1; the item labels for this table were lost in extraction. The surviving values include: Chariot version 2.2, build level xxx, product type Retail, Windows NT (version 4.0, build 1381, Service Pack 3), a Supported capability flag, 1304848 (KB) of memory, maximum buffer sizes of 32763, 1391, 32767, 32767, 8183, and 8180, and SNA software entries for IBM PCOMM or Communications Server 1.0 and Microsoft 2.2.]
Trans.   Response
Rate     Time
(#/sec)  (sec)
-------  --------
17.241   0.05800
19.231   0.05200
21.277   0.04700
Technical Details
This section discusses how Chariot calculates confidence intervals, determines the relative precision, and
generates timing records.
Confidence Intervals
Here is a formal definition of a confidence interval using statistical terms.
A confidence interval is an estimated range of values with a given high probability of covering the true
population value.
This is a quantification of the fact that Chariot is sampling the real (infinite) set of measurements.
If it could sample all of the possible measurements of a network (with infinite time and resources), it would be
100% sure that the calculated average is the correct value. Since Chariot always generates a smaller-than-infinite
set of measurements, there is always some doubt about whether the calculated average is really
close to the real average.
To state the definition another way, there is a 95% chance that the actual average lies between the lower and
upper bound indicated by the 95% confidence interval.
Here is how the confidence interval is calculated:
1. Chariot first calculates the standard deviation of the measured time of the timing records.
2. It then calculates the standard error, which is the standard deviation divided by the square root of the
number of timing records minus one.
3. It then uses a statistical table to look up a "t" value, using the number of timing records minus one.
4. The confidence delta is the "t" value times the standard error.
5. This is a confidence interval for the average measured time, which is used to display confidence intervals
for each of the calculation types (throughput, transaction rate, and response time).
The effect of the value "t" is such that the larger the sample size, the smaller the confidence interval, all things
being equal. Thus, one way to shrink the confidence interval is to have the pair generate more timing records.
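The five steps above can be sketched as follows. This is an illustration only, not Chariot's code; the "t" value would normally be looked up in a statistical table for the appropriate degrees of freedom, and is hard-coded here for the example:

```python
import math

def confidence_delta(measured_times, t_value):
    """Confidence delta per the steps above: standard deviation of the
    measured times, standard error = stdev / sqrt(n - 1), and
    delta = t * standard error."""
    n = len(measured_times)
    mean = sum(measured_times) / n
    stdev = math.sqrt(sum((x - mean) ** 2 for x in measured_times) / n)
    return t_value * stdev / math.sqrt(n - 1)

# Ten timing records; 2.262 is the 95% two-tailed "t" value for
# 9 degrees of freedom, taken from a standard t table.
times = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.0, 1.1, 0.9]
delta = confidence_delta(times, 2.262)
```

The 95% confidence interval for the average measured time is then the average minus delta to the average plus delta, which is what the displayed throughput, transaction rate, and response time intervals are derived from.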
You may sometimes see a negative number in the lower bound of a 95% confidence interval. The statistical
calculations being used assume an unbounded normal distribution, which could contain negative samples. We
know you can't get negative numbers in real life (communications never travel faster than the speed of light).
Thus, when you get a negative number on the left-hand side of your confidence interval, you should have very
low confidence in the results of the test.
See How Long Should a Performance Test Run on page 142 in the Tips for Testing chapter for further
discussion of how to set up tests to obtain reliable results.
Relative Precision
The confidence interval is a well-known statistical measurement for the reliability of the calculated average.
Unfortunately, it does not provide a good mechanism to compare tests using different scripts that do different
things; you can't easily compare the reliability of two tests by looking at the confidence interval of a file
transfer script and the confidence interval of an inquiry script.
The Relative Precision is a gauge of how reliable the results are for a particular endpoint pair. Regardless of
what type of script each pair ran, you can compare their relative precision values.
The relative precision is obtained by calculating the 95% confidence interval of the Measured Time for each
timing record, and dividing it by the average Measured Time. This number is then converted to a percentage
by multiplying it by 100. Thus, the lower the Relative Precision value, the more reliable the result. A good
Relative Precision value is 10.00 or less. On an empty LAN, you can get Relative Precision values of less than
1.00 on many tests. See How Long Should a Performance Test Run on page 142 in the Tips for Testing
chapter for further discussion of how to set up tests to obtain reliable results. See Confidence Intervals on
page 128 for information about the calculation of confidence intervals.
Chariot maintains its internal numerical values with more significant digits than those shown in the results. If
you were to calculate values like Relative Precision from the other numbers shown in the results, your
calculation may differ slightly from the numbers displayed by Chariot.
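As a sketch, the calculation reads as follows. It repeats the confidence-delta computation described under Confidence Intervals, with the "t" value hard-coded for illustration:

```python
import math

def relative_precision(measured_times, t_value):
    """Relative Precision as described above: the 95% confidence
    delta of the measured times divided by their average, converted
    to a percentage. Lower values mean more reliable results."""
    n = len(measured_times)
    mean = sum(measured_times) / n
    stdev = math.sqrt(sum((x - mean) ** 2 for x in measured_times) / n)
    delta = t_value * stdev / math.sqrt(n - 1)   # 95% confidence delta
    return delta / mean * 100.0
```

Timing records that are tightly clustered yield a small percentage; per the guideline above, a value of 10.00 or less indicates a reliable result.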
Understanding Timing
Three timing values show up throughout the results. Here are details of elapsed time, measured time, and
inactive time.
Elapsed Time
When referring to an entire test, the Elapsed Time is the duration from when the pairs started running
until they stopped. This value appears at the top of the printed and exported results, as well as in the
status bar at the bottom of a Test window.
When referring to a single timing record, the Elapsed Time is the time at the endpoint when the timing
record was cut. Time 0 is again when all the pairs completed initialization and the test's Run Status went to
Running. This value appears in the Timing Records Details.
Measured Time
When looking at the results for a pair, the Measured Time is the sum of the times in all the timing records
returned for that endpoint pair. This may be less than the amount of time the script was actually executing
(that is, the elapsed time), depending on how much activity or SLEEPs the script performed outside of its
START_TIMER and END_TIMER commands. This value appears in the Throughput, Transaction Rate,
and Response Time results.
When referring to a single timing record, the Measured Time is the time measured between the
START_TIMER and END_TIMER commands in a script. This value appears in the Timing Records
Details.
Inactive Time
The Inactive Time is the time spent outside the START_TIMER and END_TIMER commands in a script.
This is time when an endpoint isn't doing work that's being measured. If this inactive time is more than
25 ms between timing records, it is shown; otherwise, inactive time is shown as blank for times below this
threshold of 25 ms. The inactive time for each timing record is shown in the Timing Records Details. See
Graphs and Timings on page 130 for more information.
The clock timers used when timing scripts are generally accurate to 1 millisecond, although this accuracy
depends on the endpoint operating system. For most tests, this is more than sufficient. If the transactions in a
test are too short, this timing accuracy can cause problems.
If the measured time of timing records drops, the resolution of 1 millisecond becomes a higher percentage of
the actual measured time. For example, if the measured time is 5 milliseconds, there is a +-10% potential for
error, since the actual time is somewhere between 4.5 and 5.5 milliseconds.
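The percentage in this example falls out of a one-line calculation (a sketch of the arithmetic; the 1-millisecond resolution is the figure quoted above):

```python
def resolution_error_pct(measured_time_ms, resolution_ms=1.0):
    """Potential +/- timing error when the clock resolution is a
    large fraction of the measured time: the true value can be off
    by half the resolution in either direction."""
    return (resolution_ms / 2.0) / measured_time_ms * 100.0
```

A 5 ms measured time gives a potential error of 10% (the actual time is between 4.5 and 5.5 ms), while a 500 ms measured time gives only 0.1%.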
For repeatability and confidence, see How Long Should a Performance Test Run on page 142 in the Tips for
Testing chapter for information about deciding how long to run a test and how long to make each timing record.
Go to Edit, Add value, and add TcpWindowSize as a REG_DWORD. The maximum value available is
64K. Matching the TcpWindowSize to the underlying MTU size minus the IP header should improve
efficiency. This means multiples of 1460, 1457, or 1452, depending on Ethernet implementation.
Correspondingly, in the application script change the send_buffer_size and receive_buffer_size to the
registry value of TcpWindowSize. We've seen 40 times 1460 (that is, 58,400 bytes) give the best
throughput measurements.
Check AUTO to let the endpoints assign the port number automatically.
Uncheck AUTO to specify a port number yourself.
Automatic assignment gives the best performance. AUTO is the preferred choice when testing with
multiple pairs. Otherwise, if the same port is specified for multiple pairs, the performance degrades, since
the pairs must share (serialize) the use of the port to run the test.
Uncheck AUTO, and enter a port number between 1 and 65535 if you are trying to emulate a specific
application. Specific port numbers are useful when testing devices that filter or prioritize traffic based on
their port number, such as firewalls or layer 3 switches. We recommend saving the script file with a new
name when you specify port numbers so you can easily reuse it with other pairs or in other tests.
Some restrictions:
Only one pair at a time can use a port number for each datagram protocol (that is, RTP, IPX, or
UDP). For example, only one pair at a time can use UDP with port number 1234 between an
endpoint pair; however, another pair can be using IPX with port number 1234.
Some endpoints allow only one pair at a time with the same port number. This restriction is based on
their internal tasking structure. These endpoints are HP-UX, Linux, and MVS. Expect to see
message CHR0264 if you attempt to use the same port number more than once in a test with one of
these endpoints.
RFC 1700 lists the registered port numbers (on the Web, see ftp://ftp.isi.edu/in-notes/rfc1700.txt). Here
are the categories of port numbers:
1 to 1023: the well-known port numbers, reserved for standard services
1024: reserved; not assigned
1025 to 5000: typically used by TCP/IP implementations for automatic port assignment
5001 to 65535: available for use by applications
See Testing Through Firewalls for detailed information on setting the port numbers when using
firewalls.
In some tests, the amount of DNS latency required to translate hostnames can be significant. For
example on Windows NT, the following search can contribute to the time required to resolve a hostname:
1. The local hostname
2. The HOSTS file
3. Any configured DNS servers
4. The local NetBIOS name cache
5. Any configured WINS servers
6. Local subnet broadcasts
7. The LMHOSTS file
To omit DNS latency from your response-time results, use numerical IP addresses. For example, enter a
value like 10.10.10.88 in the Endpoint fields on the Add a Pair dialog.
The first two are error prone and can get confusing. The best and simplest solution is to define new
modes at any system that you will use for large APPC tests. By doing this, you have considerably more
flexibility in creating large tests.
Each APPC software stack provides a different mechanism for defining modes. Here is a snippet from an
NDF file in IBM's CM/2 that defines two new modes, BIGINTER and BIGBATCH, each with a session limit
of 1000. They use the #INTER and #BATCH classes of service so that you can still test the route calculation
and priority differences in an APPN or High Performance Routing (HPR) network.
DEFINE_MODE mode_name(BIGINTER)
max_ru_size_upper_bound(16384)
cos_name(#INTER)
min_conwinners_source(500)
plu_mode_session_limit(1000);
DEFINE_MODE mode_name(BIGBATCH)
max_ru_size_upper_bound(16384)
cos_name(#BATCH)
min_conwinners_source(500)
plu_mode_session_limit(1000);
Microsoft's SNA Server comes pre-installed with a basic set of modes for APPC use. To get to the panel
in the SNA Server Admin where modes are defined, choose any local or remote LU, press the Partners...
button, then press the Modes... button. IBM's Personal Communications and Communications Server
software both come pre-installed with a basic set of modes for APPC use. To get to the panel where
modes are defined, start the SNA Node Configuration program, select Advanced (if applicable), and select
Configure Modes. As an additional note, Communications Server's mode definitions for #INTER and
#BATCH are pre-installed with session limits of 8192 versus the normal 8.
Here is an example from an IBM CM/2 .NDF file which defines a link to a network node and specifies that
the link is secure:
DEFINE_LOGICAL_LINK
LINK_NAME(SECLINK)
ADJACENT_NODE_TYPE(NN)
PREFERRED_NN_SERVER(YES)
DLC_NAME(ETHERAND)
ADAPTER_NUMBER(0)
DESTINATION_ADDRESS(X40E650588C92)
ETHERNET_FORMAT(NO)
CP_CP_SESSION_SUPPORT(YES)
SOLICIT_SSCP_SESSION(NO)
ACTIVATE_AT_STARTUP(YES)
LIMITED_RESOURCE(NO)
SECURITY(GUARDED_RADIATION);
Using this trick means that you can test the different paths through the network by changing the mode
specified in the Chariot test. This is much simpler than reconfiguring your test network.
Using Aliases
The TCP/IP and APPC protocols already provide a mechanism for defining aliases or nicknames. You
can create your own alias directory in Chariot for the IPX/SPX protocols.
Chariot lets you take advantage of these aliases. For example, create a Chariot test using the network
addresses of TEST1 and TEST2. The console resolves these nicknames before starting a test. Using
this scheme, you can create tests that don't include any real LU names, IP addresses, or IPX addresses,
just aliases. By changing the alias, you change which systems will run the test.
APPC
You can define a Partner LU Alias for each LU. If you define LUs in the network to have aliases of
TEST1 and TEST2, then the console resolves the aliases and can run the test.
TCP/IP
You can define nicknames for IP addresses by defining them in the file named HOSTS, in your ETC
directory. The following example illustrates creating an alias for TEST1 and TEST2:
44.44.44.60    TEST1
44.44.44.62    TEST2
IPX/SPX
Chariot stores aliases for IPX addresses in file SPXDIR.DAT (in the directory where you installed
Chariot). See Working with IPX/SPX Entries on page 56 in The Main Window section for
information on adding entries and making changes to this file.
44.44.44.101    00000002:006097c3f512
44.44.44.103    00000002:00a0247b58de
44.44.44.104    34242342:000000000001
Using CLONETST
Another option for changing network addresses is to use the CLONETST command-line program. It lets
you use one Chariot test file as a template for creating another. If you have a small number of tests that
you run, you can combine the different scripts, variables, and protocols in a master test file. Using this
file as input to CLONETST, you can easily create large tests, without using the console. You can even
create new tests programmatically by creating CLONETST input files and then executing CLONETST
from within your code. With batch programs and a command interpreter like Perl or REXX, you can
automatically create a flexible collection of tests. See CLONETST: Replicating Pairs in a Test on page
114 in the Using the Command Line Programs chapter for more on this powerful command.
These options are located in the Firewall Options tab on the User Settings notebook. See Changing Your
Firewall Options on page 54 in the Operating the Console chapter for more information.
User Settings notebook. Select a port number that passes the firewall port filtering. Endpoint 1 uses the
port number configured when returning test results.
Port Specification
There are three port specifications to pass through your firewall:
Test Setup Data
Chariot test setup data is sent from Endpoint 1 to Endpoint 2 using port 10115 for tests using the
TCP, RTP, or UDP protocol or port 10117 for tests using the SPX or IPX protocol.
Test Data
You typically need to configure a specific destination port number for Endpoint 2 in the application
script. Select a port number that passes the firewall port-filtering criteria.
Streaming Results
If your test is running a streaming script, the results are sent from Endpoint 2 to Endpoint 1 using
port 10115 (TCP) or port 10117 (SPX). Even though streaming scripts must use a datagram protocol,
the results are always sent back over TCP (for UDP or RTP) or SPX (for IPX).
In summary, the following ports must be allowed through the firewall:
Chariot Flow         TCP/UDP/RTP port  SPX/IPX port
Test setup data      10115             10117
Streaming results    10115             10117
Data Correlation
When you specify a port number in the application script, Endpoint 2 needs to be able to determine which
application script to process. For example, when multiple pairs use the same port number in the
application script, only one Endpoint 2 CONNECT_ACCEPT command can be outstanding at a time.
Endpoint 2 needs to determine which script Endpoint 1 is processing, so it must correlate the connection
data with an application script. This is only true for the TCP and SPX protocols; datagram protocols are
restricted to one pair per unique port.
There are two settings in the Firewall Options for data correlation.
Use Endpoint 1 identifier in data
This is the default setting and it uses a four-byte correlator in the first data flow of every connection
(from Endpoint 1 to Endpoint 2). After Endpoint 2 receives the data, it determines the correct script
to execute. This option works for most firewalls. Always use this option if your firewall also
provides network address translation (NAT).
Use Endpoint 1 fixed port
This setting is only valid for TCP connections. Instead of sending a four-byte correlator, the
Endpoint 1 source port and address are used for correlation. This requires Endpoint 1 to use the same
source port for every connection in the test. For this reason, this is not a good option for NAT,
because the address and port number are typically translated. For short connections, this may cause
degradation of performance on some systems, since Endpoint 1 must reuse the same port and wait for
the TCP/IP stack to free the port. Use this option when your firewall acts as an application-level
firewall, that is, it inspects the actual data payload.
Some operating systems do not work with fixed source ports and always use the four-byte correlator.
These are:
For firewalls that inspect the data payload, specify the .CMP file to use in the application script (see
Creating Your Own User Data Files for detailed information). This file should contain the data
required to pass the specific filter, for example, an HTTP GET request. Chariot needs to know the exact
file size when creating a .CMP file. Because Chariot sends a file of any size, it is important to specify a
send size that falls exactly on the file's byte boundary to avoid wrapping the data. Each SEND command
grabs the first N bytes. If the file is longer than N, the pointer is not reset to the beginning of the file; it
continues until the end-of-file marker is reached and then starts over at the beginning. If this is not
configured properly, your firewall sees data that looks incorrect.
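The wrap-around behavior just described can be modeled like this. It is a sketch under the assumptions in the text, not Chariot's actual implementation:

```python
class CmpFile:
    """Model of how an endpoint reads a .CMP file, per the text:
    a SEND of n bytes reads from the current position, wraps to the
    start at end of file, and the position is not reset between SENDs."""
    def __init__(self, data: bytes):
        self.data = data
        self.pos = 0

    def send(self, n: int) -> bytes:
        out = bytearray()
        while len(out) < n:
            chunk = self.data[self.pos:self.pos + n - len(out)]
            if not chunk:          # hit end-of-file: wrap to the start
                self.pos = 0
                continue
            out += chunk
            self.pos += len(chunk)
        return bytes(out)
```

With a 5-byte file, a 3-byte SEND followed by a 4-byte SEND yields "abc" and then "deab": the read position carries over between SENDs and wraps, which is why a send size that is not on the file's byte boundary makes the payload drift out of alignment.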
File USER01.CMP contains the HTTP GET command (assume 404 bytes)
GET http://www.ganymede.com/support/chariot_technical_questions.htm
HTTP/1.0
If-Modified-Since: Tuesday, 07-Apr-98 20:39:41 GMT; length=16755
Referer:
http://www.ganymede.com/support/chariot_technical_questions.htm
Proxy-connection: Keep-Alive
User-Agent: Mozilla/4.05 [en] (WinNT; I)
Pragma: no-cache
Host: www.ganymede.com
Accept: image/gif, image/x-xbitmap, image/jpeg, image/pjpeg, image/png, */*
Accept-Language: en
Accept-Charset: iso-8859-1,*,utf-8
File USER02.CMP contains the HTTP server response and the actual file (assume 1020 bytes)
HTTP/1.1 200 OK
Server: Microsoft-IIS/4.0
Date: Tue, 21 Jul 1998 22:11:49 GMT
Content-Type: text/html
Accept-Ranges: bytes
Last-Modified: Tue, 07 Apr 1998 20:39:41 GMT
ETag: "92274d4a6562bd1:6457"
Content-Length:
"Text file ----"
Configuration steps:
1. In the application script, set port_number: 80
2. Set size_of_record_to_send: 404 and control_datatype: USER01.CMP
3. Set file_size: 1020 and send_datatype: USER02.CMP
4. Run the test
Do not run other software on the endpoint computers, and turn off screen savers
Throughput Testing
In addition, consider these recommendations when attempting to measure maximum throughput.
Use the script FILESNDL.SCR when testing for maximum throughput.
This script sends 100,000 bytes from Endpoint 1 to Endpoint 2, then waits for an acknowledgement.
This simulates the core file transfer transaction done by many applications. Experiment with its
file_size variable. We've seen that a good size is 1 MByte (10 times FILESNDL's default of 100,000
bytes), but 10 MBytes is worth experimenting with.
If using the TCP protocol, experiment with Setting the TCP Receive Window.
If using the FTPGET or FTPPUT scripts, don't set the number of repetitions greater than 1.
Streaming Testing
For streaming scripts, don't set your file size or buffer size too low when sending at a high data rate.
Small sizes cause Endpoint 1 to generate too many timing records. Large sizes avoid an aggregate
throughput value that's greater than the network's capacity (which can only occur, by the way, with
non-zero SLEEP times). See the Viewing the Results chapter on page 117 for more information.
Timing records are taken frequently enough to show fluctuations in performance during the test.
There aren't too many timing records. Tests with more than 10,000 timing records use up
considerable memory and disk space, and make the Chariot GUI more cumbersome.
A high relative precision value means that you are either running in a network whose performance is
fluctuating, or that your test is too short. Assuming you don't want to change the network you are testing,
you need to make the test run longer. Most performance tests should last between two and five minutes.
You control the duration of a test by varying the number of endpoint pairs, the number of timing records
to generate, the number of transactions per timing record, and the amount of data sent in each transaction.
For the sake of simplicity, let's assume that the number of endpoint pairs and the amount of data are
fixed.
If you have one timing record for each transaction, you can see the performance fluctuations that occur
from transaction to transaction. In a large test running small transactions like the Credit Check script,
you could generate hundreds of thousands of timing records. This would require a tremendous amount of
memory at the console and could require a lot of disk space. On the other hand, if you only generated one
timing record for an entire test, the results would be simple to work with, but you would not be able to see
any variation in performance from transaction to transaction. The trick is to find a balance.
See Avoiding Too Many Timing Records on page 148 for information on reducing the number of
timing records and avoiding short timing records.
If you are trying to emulate a particular application, use the same connection type that the application
does.
If you are trying to test a network backbone or the equipment that supports one, use long connections.
Because the test endpoints don't have to go through the overhead of starting and ending the
connection, each endpoint pair can create considerably more traffic.
If you are trying to test a network protocol stack, run a mix of long and short connections.
Because RUNTST is run from the command line, it is easy to combine it with other networking tools to
build even larger, more complex tests. See RUNTST: Running Tests on page 111 in the Using the
Command-Line Programs chapter for more information on its operation.
the ACK is sent, the 40-byte buffer is sent by Endpoint 2. This time, Endpoint 1's receipt of 1,500
bytes is satisfied and Endpoint 1 sends the 25 bytes with the piggybacked ACK. Endpoint 1 waits out
the 200 ms delay, so each timing record takes about 20 seconds.
We have discovered in our testing that protocol stacks on Linux, NetWare, and IBM's MVS operating
systems have implemented the delayed ACK algorithm. On other operating systems without this
algorithm, the script runs in less than 2 seconds, because the delays are reduced.
Protocol configuration and tuning
Changing a network windowing parameter or buffer size in any piece of network hardware or
software will vary the results you see.
Network Configuration
Modifying the settings on the routers and switches affects the computer performance. Changing the
physical configuration of the network can also produce inconsistent results.
Other active programs
If you have other programs active alongside the endpoint program, they compete for the system CPU,
affecting performance. This is especially noticeable with DOS and Windows applications.
Other network activity
Unless you are using dedicated wiring between endpoint programs, you may be competing for
network bandwidth with programs running on other computers. Competing traffic affects test results.
Weve also noticed that a network adapter can keep busy handling excessive broadcast or multicast
traffic in a network.
Console and endpoint together in one computer
The Chariot console and endpoint can reside together in the same computer. However, the endpoint
program is competing with the console for resources. We recommend not using the endpoint at the
console computer when doing serious performance measurements.
Batch vs. real-time reporting
Always use Batch reporting for serious performance measurements.
Foreground vs. background endpoint programs
Different operating systems behave differently in how they allocate resources to programs that run in
the foreground, in the background, as an icon, or as a service or detached. Often, this behavior is
tunable.
Available RAM and disk swapping
The amount of RAM in a computer affects program performance in many ways. Performance
degrades significantly whenever swapping to disk occurs (that is, there isn't enough physical RAM).
Screen savers
The screen savers many of us have were designed to be displayed when you were at lunch. They
consume CPU resources mightily when they kick in. Make sure they aren't active on endpoint
computers while taking serious measurements.
To get consistent results between a pair of computers, every piece of relevant software must be at exactly
the same version and fix level. This includes the BIOS software, the operating system software, the
network device drivers, the network protocol stacks, and the Chariot software itself.
Data compression is done by the hardware or software in your network, and you want to test
performance with the compression activated.
Put the data in one or more files. Name the first file USER01.CMP, the second file USER02.CMP, and so
on. Put your data in the file starting at the beginning; no header is necessary. When sending the data, an
endpoint works sequentially through the file; when it reaches the end, it will wrap, appending the first
byte of the file to the last.
The same USERxx.CMP file must reside at all the endpoints you want to test with, and should be placed in
the CMPFILES subdirectory along with the other Ganymede-supplied .CMP files. Files with the same
name must contain the exact same data on each endpoint where they are used if data validation will be
used.
Performance Considerations
A USERxx.CMP file shouldn't be larger than 1 MByte. An endpoint loads the whole file into memory
when the test is started (to avoid disk I/O while the test is running), and it limits the size to one megabyte.
However, the file shouldn't be too small. The best data compression devices we know can adjust to
patterns in data up to 120 KBytes in length.
If your USERxx.CMP file is too large to fit in memory, you'll encounter error message CHR0270. Try
increasing the limit placed by UNIX systems upon shared memory segments, as described below.
On AIX, Compaq Tru64 UNIX (Digital UNIX), and Solaris, the limit is the largest amount of
memory a process can allocate, which is determined by the amount of virtual memory.
On Linux, there is no way to configure a larger shared memory segment. The limit depends on the
implementation of Linux.
When you run a test for a fixed duration, an endpoint ignores the number_of_timing_records script
variable. This section includes discussions on Using Non-Streaming Scripts and Using Streaming
Scripts, because you change different script variables depending on the type of script.
If the test uses a non-streaming script with no sleep times, or a streaming script with UNLIMITED for
the send_data_rate, an endpoint runs as many transactions during that time as it can.
If the test uses a streaming script, the number of transactions that Endpoint 1 runs is based on the
send_data_rate. When you run a test for a duration much greater than typically required, you greatly
increase the number of timing records the endpoints generate. Chariot becomes cumbersome when the
number of returned timing records is above 10,000.
For example, you may run a test with one endpoint pair that generates 100 timing records in 20 seconds
on your network. If you run the same test with the fixed duration set to one hour, Chariot generates
approximately 18,000 timing records. Additional pairs multiply that number.
1.
In the Test window, press the Add pair/Edit pair toolbar icon. When the dialog is shown, you can
open or edit a script, and change the endpoint pair's addresses and protocols.
2.
In the Add or Edit an Endpoint Pair dialog, click the Open a script file button, which opens a list
of scripts. Once you have chosen a non-streaming script, click the Edit this script button.
3.
An Edit a Script dialog is shown. Double-click the transactions_per_record variable in the bottom
half of the dialog.
4.
Increase the Current Value of the transactions_per_record variable by a factor of ten. For example,
if transactions_per_record is 50, change it to 500. Save your changes by pressing the OK button on
the open dialogs.
5.
From the Run menu, select the Set run options menu item. Click the Run for a fixed duration
button. In the Duration field, select one (1) minute.
6.
Run the test and view the results. Using the Raw Data Totals tab, look at the Number of Records.
This is the total number of timing records generated in one minute. We recommend about 50 to 100
records per pair for good statistical significance. If the results contain more than a total of 10,000
timing records, go back and further increase the transactions_per_record. Otherwise, continue to
the next step.
7.
Increase the script's transactions_per_record variable to match the duration of your test:
Example:
In the steps above, you entered 500 for the script's transactions_per_record variable and ran the test for
one minute. The results should have been about 300 timing records per pair (100 per 20 seconds = 300
per minute). 300 timing records per pair is a fine number if you are not running hundreds of concurrent
endpoint pairs.
Now you would like to run the test for a weekend (48 hours). Consider the math:
transactions_per_record = (60 minutes per hour) x (48 hours) x (500 transactions per record)
transactions_per_record = 1,440,000
Open the script and change the transactions_per_record variable to 1,440,000. Then, change the
Duration to 48 hours in the Set Run Options dialog.
For example, if you are using REALAUD.SCR with the script default parameters, you can expect each
timing record to take about 1.39 seconds: (14,040 bytes * 8)/80,736 bps.
Use the following formula to determine the value to use for the file_size variable:
file_size = (test duration / number of timing records) x (send_data_rate / 8)
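These relationships can be checked numerically (the function names below are illustrative, not Chariot APIs):

```python
def seconds_per_timing_record(file_size_bytes: int, send_data_rate_bps: int) -> float:
    """Time one timing record takes: bytes converted to bits, divided by the rate."""
    return file_size_bytes * 8 / send_data_rate_bps


def file_size_for(record_seconds: float, send_data_rate_bps: int) -> int:
    """Invert the relationship to choose a file_size for a target record duration."""
    return round(record_seconds * send_data_rate_bps / 8)


# REALAUD.SCR defaults: a 14,040-byte file at 80,736 bps
print(round(seconds_per_timing_record(14040, 80736), 2))  # 1.39
```

Plugging a target record duration back into file_size_for gives the file_size value to enter in the script.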
If you are running a script at full speed, that is, with the send_data_rate set to UNLIMITED, here's a
way to reduce the number of timing records to a tenth:
1.
In the Test window, press the Add pair/Edit pair toolbar icon. When the dialog is shown, you can
open or edit a script, and change the endpoint pair's addresses and protocols.
2.
In the Add or Edit an Endpoint Pair dialog, click the Open a script file button, which displays a
list of scripts. Once you have chosen a streaming script, click the Edit this script button.
3.
An Edit a Script dialog is shown. Double-click the file_size variable in the bottom half of the
dialog.
4.
Increase the Current Value of file_size by a factor of ten. For example, if file_size is 3,000 bytes,
change it to 30,000.
5.
From the Run menu, select the Set run options menu item. Click the Run for a fixed duration
button. In the Duration field, select one (1) minute.
6.
Run the test and view the results. Using the Raw Data Totals tab, look at the Number of Records.
This is the total number of timing records generated in one minute. We recommend about 50 to 100
records per pair for good statistical significance. If the results contain more than a total of 10,000
timing records, go back and further increase the file_size variable.
600,000 causes a delay of 600 seconds (ten minutes). Set the Run Options to show results in real time.
When you start the test, you will see the results updated every ten minutes. If you stop the test, you can
view the endpoint details to see how your network performance changed over the course of the test.
Ganymede Software's Pegasus(TM) family of products provides end-to-end performance monitoring and
reporting from an end-user perspective. Pegasus alerts network managers to performance problems before
their users complain, and provides unique information about network performance. While other tools
look into the network, pointing to problems on specific segments or ports, Pegasus views performance
through the network, providing a single, accurate view of network response time and connectivity.
Pegasus works well with Chariot to provide the best performance management solution for corporate
enterprise networks. Use Chariot to predict performance and select equipment, then use Pegasus to
monitor the network. Once you detect performance problems, use Chariot again to test the network after
changes and to establish baselines for Pegasus monitoring.
Troubleshooting
It's possible that you'll run into problems running Chariot. Problems are generally related to how the
communications software is set up and how the tests are configured.
This chapter helps you find the information necessary to solve problems you encounter. It also describes some
common problems, and how to solve them.
Also, see the Working with the Error Log Viewer section on page 96 for more information.
Press the Message help button at the bottom of this dialog. This is probably the most important button in
Chariot! We've worked hard to make the Message Help thorough, accurate, and helpful. We welcome your
feedback on ways we can continue to improve the help that's provided.
Sometimes secondary error information is shown in this dialog to help further isolate the problem.
The Show details button gives advanced technical information about the problem. For example, it shows the
return code number for failed communications calls.
For SPX and TCP problems, it shows the port number and call number, among other values.
All communications errors reported to the console, or that the console detects, are written to the console's
error log. The path and name of this error log file are shown at the bottom of the error message dialog.
See the Messages and Application Scripts manual for a numerical listing of the Chariot error messages.
If the error was detected by Endpoint 1 or Endpoint 2, check the Test Setup at the console to determine the
actual network address of the computer where the error was detected. All Operator Actions described by the
message help should be taken at the computer that detected the error, unless otherwise specified.
Although one computer may detect an error, the solution may actually lie elsewhere. For example, if Endpoint
1 detects an error indicating that a network connection could not be established, it may be because there is a
configuration error in the middle of the network or at Endpoint 2.
Common Problems
Here are some possible problems you may encounter.
Insufficient Threads
The Chariot console creates one or more threads for each endpoint pair when running a test. This is in
addition to the threads created by the underlying network software (as well as those used by other
concurrently running applications).
In our testing with the default settings for Windows NT, we did not exhaust threads until we reached about
7000 threads. We don't believe you should encounter out-of-threads problems with Windows NT; please let
us know if you do.
Windows 95 and Windows 98 are much more severely limited in their thread capacity.
Insufficient Resources
If you receive an insufficient resource error while running Chariot, your computer does not have enough
memory to run Chariot successfully. Close the other applications you currently have running and then
restart Chariot.
Assertion Failures
Chariot does a lot of internal checking on itself. You may see the symptoms of this checking as an
"Assertion failed" message. If you see this in a message at the console, it asks whether you want to Exit. The
best choice is Yes; choosing No probably results in a protection fault or yet another assertion failure. If
you choose No and you are able to continue (that is, the bug was minor), we recommend saving your test files
as soon as possible.
If you encounter an assertion failure, please write down the sequence of things you were doing when it
occurred. Chariot captures details related to the problem in an ASCII text file named ASSERT.ERR in your
Chariot directory. Save a copy of the ASSERT.ERR file, and send it back to us via e-mail.
Damaged Files
Binary files can be damaged (that is, truncated) if you copy them using wildcards at a command prompt. The
problem occurs when the hex character X'1A' (the DOS end-of-file character) is encountered. For example,
the following command is the wrong way to copy a binary script file for the database-update transactions:
COPY DB*.SCR TEMP.SCR
Doing a DIR for file DBASES.SCR and file TEMP.SCR should show that the COPY command has truncated the
file. The following command is also the wrong way to copy binary files:
COPY DB*.SCR TEMP.SCR /B
This creates a truncated file that is even a byte smaller than the previous copy. Here's the right way to copy
binary files; don't use wildcards:
COPY DBASES.SCR TEMP.SCR
1.
Write down what you were doing when the problem occurred.
Most important is the sequence of steps you took, the keys you pressed or menu selections you made, and
the files you were working with.
2.
3.
where d: and path are the drive and path where you installed Windows NT.
For Windows 95/98:
For TCP/IP, copy the HOSTS and SERVICES files found in the following directory:
d:\path\WINDOWS
where d: and path are the drive and path where you installed Windows 95/98.
5.
6.
2.
The same computer uses V4.1 or V4.11 of Personal Communications for Windows NT,
3.
4.
Symptom: The test pair initializes fine and goes to the Running state, but never returns any timing records
(that is, it stays at 0 of XXXX timing records reported) and never completes.
Solution: A fix is provided by IBM for this problem with Personal Communications for Windows NT. It
can be obtained from: ftp://ps.software.ibm.com/ps/products/pcom/fixes/v4.1x/winnt/ic16809
This fix requires the user to install several files in several different directories, which can become
time-consuming if the fix is needed on multiple computers. Since IBM does not ship any installation aid for
this fix, Ganymede Software has written an installation .BAT file to help with the install. Download
INSTFIX.ZIP, unzip it into the same directory where the IC16809.EXE file has been unzipped, and type
INSTFIX for directions.
For information on IBM's Communications Server for Windows NT, access the Web site at
http://www.software.ibm.com/network/commserver/support/fixes/fixes_csnt.html (fixes)
Customer Service
For any Ganymede Software product, call Customer Service for:
Upgrade orders
Registration problems
Product information
Referrals to dealers and consultants
Replacement of missing or defective parts (disks, manuals, etc.)
Information about technical support services
For questions about how to use our software, see the Troubleshooting Guidelines section below. In
addition, we keep the Ganymede Software Web site up-to-date with the latest information on all aspects of
our products.
http://www.ganymede.com/
If necessary, you can contact us at:
Ganymede Software Inc.
1100 Perimeter Park Drive, Suite 104
Morrisville, NC 27560-9119
888-GANYMEDE (888-426-9633) toll free in the USA
919-469-0997 (voice)
919-469-5553 (fax)
e-mail: info@ganymede.com
Troubleshooting Guidelines
Our customer care team is always happy to assist you with any problems you encounter. We recommend
you try the following steps before calling for assistance, as you can usually find solutions to many common
problems in the existing documentation:
1.
Check the Technical Support Web site. Our technical support Web site provides:
2.
Review the Troubleshooting chapter in your product's User Guide. This chapter provides solutions
to many common problems as well as information about viewing the error logs and getting the latest
product updates and fixes.
3.
Review the README file. This file contains updated information that does not appear in this
version of the manual. It's a good idea to print this file and keep a copy close at hand.
Index
A
abandon run, 82
Abandoned status, 84
about box, 60
About Chariot dialog, 60
ACK, 145
add
multicast group, 71
pair(s), 70
addresses
aliases for, 56
aggregate throughput exceeds capacity, 131
aliases, 137
for IPX addresses, 56
ALL CAPS text, 4
APING
testing APPC connections, 25
APPC, 20
bidding for sessions, 136
defining modes for large tests, 135
network address
for IBM's software for Windows NT, 21
for SNA Server, 21
session limits, 135
TP name for endpoints, 26
APPC mode name, 24
application script name, 100
Application scripts
Modifying parameters, 35
APPN, 26
APPN class-of-service (COS), 136
APPN testing, 136
ASSERT.ERR, 156
assertion failures, 155
AUD file, 108
audit log, 108
authorization key, 15
automation of tests, 150
avoiding too many timing records, 148
Axis Details dialog, 75, 76, 77
B
batch reporting, 79
Beta product type, 4
bidding for APPC sessions, 136
binary files, 156
broadcast
definition of, 38
bugs in IBM Communications Server for Windows NT, 158
bugs in IBM PCOMM for Windows NT, 159
bugs in SNA Server, 158
bugs in Windows NT IPX/SPX, 157
bugs in Windows NT TCP/IP, 157
bytes received, 124
bytes sent, 124
C
calculation
response time, 119
throughput, 119
transaction rate, 119, 120
Calgary Corpus
Web site, 147
change display fonts, 48
change display fonts on operations menu, 49
change user settings, 49
undo button, 49
change user settings menu, 48
changing software versions, 146
changing the Run Options, 78
Chariot
directory structure, 15
installation, 11
installation files, 15
package, 11
product types, 4
CHARIOT.LOG, 116
for service, 156
checksum, 109
choosing data types, 147
choosing how to end a test run, 78
choosing how to report timings, 79
Cisco IOS, 29
class-of-service (COS), 136
CLONETST, 114
using for flexible tests, 137
CLONETST.LOG, 116
for service, 156
CM_TP_NOT_AVAILABLE, 25
CM_TP_NOT_RECOGNIZED, 25
Collapse all groups menu item, 72
commas, in CSV format, 66
comma-separated values, 65
Communications Manager/2, 20
Communications Server, 20
comparison
opening, 94
saving, 93
Comparison window, 92
compatibility considerations, 2
confidence interval, 128
confidence interval, 95%, 128
in calculating relative precision, 129
ConLoser, 136
connect timeout, 80
connection network, 26
connections
short vs. long, 143
constant value, 101
ConWinner, 136
copy, 67, 69, 94
copying binary files, 156
copying error details to the clipboard, 154
copying test pairs, 114
Courier font text, 4
CPI-C return code, 25
CPU utilization, 80
CRC checksum, 108
creating tests, 47
csv file format, 65, 108
defaults, 53
cut, 66
D
damaged files, 156
DAT file, 108
datagram
fragmentation, 37
modifying parameters, 35
performance, factors affecting, 37
Retransmission Timeout Period parameter, 36
simulating applications, 35
Support, 33
Window Size, 36
Datagram
Number of Retransmits before Aborting parameter, 37
tab, 38
datagram applications, 34
datagram defaults, 51
datagram protocol applications, 19
Datagram tab, 88
deactivate HPR, 137
default directories, 50
default protocol, 49
default run options
changing, 50
default service quality, 49
delete, 67
Demo product type, 4
DEREGISTER.DAT file, 17
deregistration key, 17
deselect all pairs, 66, 94
detailed error information, 154
details
endpoint pair, 125
directories
default, 50
directory structure
for console, 15
distribution, 101
DNS Latency, 134
documentation
conventions, 4
online help, 4
domain name
for Windows NT, 27
double-quotes, in CSV format, 66
Dr. Watson, 155
DRWTSN32.LOG, 156
duration of tests, 142
E
edit
multicast group, 71
pair(s), 66, 70
script command parameters, 101
script variables, 101
scripts, 99
edit menu
(Comparison window), 94
elapsed time, 129
e-mail address, 163
end type, 78
endpoint
definition, 7
endpoint configuration, 123
endpoint configuration details, 127
Endpoint Configuration tab, 88
endpoint failures during a run, 80
ENDPOINT.DAT, 70
file description, 108
ENDPOINT.LOG, 116
for service, 156
ERR file, 108
Error detected status, 84
error log
CLONETST, 96
endpoints, 96
location, 96
RUNTST, 96
Error Log Viewer
filtering, 97
log
details, 98
opening, 97
overview, 96
searching, 98
error logs
formatting, 116
Error Log Viewer
exporting, 97
F
fairness, 143
file menu
(Error Log Viewer), 97
file type handling, 108
filtering the Error Log Viewer, 97
find
in the error log, 98
Finished status, 84
firewall options
for timing records, 54
Firewalls
Testing through, 138
flexible tests, 137
FMTLOG, 116
FMTTST, 112
format tests, 112
formatting error logs, 116
fully-qualified LU name
defined, 20
G
Ganymede Software
Customer Care, 163
Gbps defined, 52
getting consistent results, 145
getting started, 7
GIF file, 65, 109
GQOS, 29
graphs
as GIF files, 64
bar graph, 76
configuration, 75
histogram, 77
line graph, 75
lost data, 86
Group by menu item, 72
Group sort order menu item, 72
H
handling endpoint failures during a run, 80
I
IBM Communications Server for Windows NT, 21
IBM Personal Communications for Windows NT, 21
icons
Chariot, 14
icons on the Test window toolbar, 61
inactive time, 129
Information menu item, 72
INI file, 109
Initialized status, 83
Initializing status, 83
installation files
for console, 15
installing Chariot, 11
installing Chariot 3.1
over Chariot 2.2, 14
insufficient resources, 155
introducing Chariot, 7
IOS, 29
IP addresses, 134
IP Multicast, 42
IPCONFIG
for Windows NT, 27
IPX, 26
configuring Windows NT console, 27
using IPXROUTE on Windows NT console, 27
IPX address aliases, 56
IPXROUTE command
on Windows NT console, 27
J
jitter, 41
results, 121
Jitter tab, 87
K
kbps defined, 52
Kbps defined, 52
KBps defined, 52
keys help
Comparison window, 95
Error Log Viewer, 99
Main window, 60
Script Editor, 107
Test window, 91
L
LAN connections, 136
LAN environments with APPN, 26
LANG.LCL, 156
large APPC tests, 135
LCL file, 109
length of tests, 142
license code, 15
Local File, 156
locale file, 109
LOG file, 109
log files
formatting, 116
long connections, 143
loopback
not with SNA Server, 21
lost data, 86
Lost Data Tab, 86
Lotus 1-2-3, 108
lower distribution, 101
M
manual
conventions, 4
mark selected items, 66, 94
marking pairs and groups, 66, 94
Mbps defined, 52
measured time, 129
in confidence interval calculation, 128
in raw data totals, 124
in throughput calculation, 119
menu
change display fonts, 48
change user settings, 48, 93
edit (Comparison window), 94
File, 49
file (Error Log Viewer), 97
file (Main window), 49
help menu, 60
options, 48
options menu, 49
options menu (Error Log Viewer), 98
tools menu, 55
view (Error Log Viewer), 97
window menu, 84
messages, 153
Microsoft Windows 95/98, 13
Microsoft Windows NT, 12
Microsoft WinSock 2, 30
mode names
predefined, 24
modifying application scripts parameters, 35
modifying datagram parameters, 35
N
n/a, 83
NAT, 54, 139
network address, 19
APPC LU name
for IBM's software for Windows NT, 21
for SNA Server, 21
IP, 27
RTP, 27
UDP, 27
Network Address Translation, 54, 139
network applications
connection-oriented vs. connection-less, 33
network node, 26
network node testing, 136
new script, 102
new test, 49
non-secure mode, 136
normal distribution, 101
notebook
options, 49
Novell support, 161
Number of Retransmits before Aborting parameter for datagrams, 37
O
online help, 4
open a test, 49
open comparison, 93
opening
comparison, 94
operating the console, 47
options menu, 49
options menu (Error Log Viewer), 98
out of resources, 155
out of threads, 155
output template, 62
output templates, 55
adding, 55
copying, 56
defaults, 53
modifying, 55
P
pair
add, 70
edit, 70
paste, 66, 67
PCOMM, 21
Pegasus, 150
Ping, 30
Poisson distribution, 101
Poll endpoints now menu item, 77
polling, 84
polling the endpoints, 82
popups
assertion, 155
port, 54
port number, 134
previous license code, 17
primary messages, 154
print and export options, 62
print options, 93
output templates, 55, 56
output templates, 53
printing
custom options, 64
protection faults, 155
protocol
connection-less, 33
connection-oriented, 33
default, 49
protocols supported by the console, 19
protocols supported by the endpoints, 19
Q
QoS, 28, 57
QoS templates
predefined, 28
Quality of Service, 28, 57
Quality of Service template editor, 58
R
random sleep, 101
raw data totals, 124
Raw Data Totals tab, 87
reading error messages, 153
Readme file
before calling Technical Support, 164
description of known limitations, 157
real time, 79
Real-time Transport Protocol, 40
receive timeout, 39
records
count, 124
registered trademarks, xii
Registration Center, 15
registration number, 15
changing, 54
S
saving
comparison, 93
script, 103
scope, 64
SCR file, 109
script
default, 49
Script Editor
adding a new script, 102
edit
parameters, 101
Stopping status, 84
streaming
viewing streaming results, 122
stress testing, 144
summary report, 62
T
tabs in the Test window, 61
TCP/IP
choosing a port number, 134
configuration, 27
HOSTS file
for Windows NT, 27
IP address, 27
network address, 27
performance tuning, 133
TCP receive window, 133
testing a connection, 30
Technical support, 164
test automation, 150
test files
default directory, 50
test results
viewing, 117
Test Setup tab, 85
test totals, 118
test window keys help, 91
Test window menu items, 61
Testing through Firewalls, 138
TESTS directory
for Chariot tests, 15
threads, 155
throughput, 119
throughput calculation, 119
Throughput tab, 85
throughput unit default, 52
Throughput units menu item, 72
Time To Live, 39
timing records
printing selected pairs, 64
too many, 142, 148
timing records per pair, 142
tips for testing, 133
toolbar, 61
tools menu, 55
TP name
for endpoints, 26
trademarks, xii
transaction count, 124
transaction rate, 119
transaction rate calculation, 119
Transaction Rate tab, 85
transactions_per_record setting, 144
traps, 155
troubleshooting, 153
TST file, 109
TTL, 39
TXT file, 109
U
UDP
configuration, 27
understanding IP Multicast, 42
understanding multimedia support, 38
understanding reliable datagram delivery, 34
understanding the run status, 83
undo button, 49
unicast
definition of, 38
uniform distribution, 101
uninstall, 16
unmark selected items, 66, 94
unmarking pairs and groups, 66, 94
untitled Test window, 62
upper distribution, 101
USERxx.CMP files, 147
V
validating received data, 81
view error log
location, 98
view menu (Error Log Viewer), 97
viewing the results, 117
W
warnings
changing, 52
Where to read script files, 50
Where to write console error logs, 50
WINAPING, 25
window
Comparison, 92
Test, 61
window menu, 84
Window Size for datagrams, 36
Windows 95
no QoS support, 13
Windows 95/98
console installation, 14
console requirements, 13
Windows 95/98 endpoint
IP network address, 28
Windows 98
QoS support, 13
Windows NT
console installation, 14
console requirements, 12
QoS support in 5.0, 13
Windows NT console IP address, 27
Windows NT endpoint
IP network address, 27
WK3 file, 109
using FMTTST, 112
working with datagrams, 33
working with multimedia support, 33