
Using C Vuser Functions

Transaction Functions

lr_end_sub_transaction Marks the end of a sub-transaction for performance analysis.

lr_end_transaction Marks the end of a transaction.

lr_end_transaction_instance Marks the end of a transaction instance for performance analysis.

lr_fail_trans_with_error Sets the status of open transactions to LR_FAIL and sends an error message.

lr_get_trans_instance_duration Gets the duration of a transaction instance specified by its handle.

lr_get_trans_instance_wasted_time Gets the wasted time of a transaction instance by its handle.

lr_get_transaction_duration Gets the duration of a transaction by its name.

lr_get_transaction_think_time Gets the think time of a transaction by its name.

lr_get_transaction_wasted_time Gets the wasted time of a transaction by its name.

lr_resume_transaction Resumes collecting transaction data for performance analysis.

lr_resume_transaction_instance Resumes collecting transaction instance data for performance analysis.

lr_set_transaction_instance_status Sets the status of a transaction instance.

lr_set_transaction_status Sets the status of open transactions.

lr_set_transaction_status_by_name Sets the status of a transaction.


lr_start_sub_transaction Marks the beginning of a sub-transaction.

lr_start_transaction Marks the beginning of a transaction.

lr_start_transaction_instance Starts a nested transaction specified by its parent’s handle.

lr_stop_transaction Stops the collection of transaction data.

lr_stop_transaction_instance Stops collecting data for a transaction specified by its handle.

lr_wasted_time Removes wasted time from all open transactions.

String Functions

lr_eval_string Replaces a parameter with its current value.

lr_save_string Saves a null-terminated string to a parameter.

lr_save_var Saves a variable length string to a parameter.

lr_save_datetime Saves the current date and time to a parameter.

lr_advance_param Advances to the next available parameter.

lr_decrypt Decrypts an encoded string.

lr_eval_string_ext Retrieves a pointer to a buffer containing parameter data.

lr_eval_string_ext_free Frees the pointer allocated by lr_eval_string_ext.

lr_save_searched_string Searches for an occurrence of a string in a buffer and saves a portion of the buffer, relative to the string occurrence, to a parameter.

Message Functions

lr_debug_message Sends a debug message to the Output window or the Business Process Monitor log files.

lr_error_message Sends an error message to the Output window or the Business Process Monitor log files.

lr_get_debug_message Retrieves the current message class.

lr_log_message Sends a message to a log file.

lr_output_message Sends a message to the Output window or the Business Process Monitor log files.

lr_set_debug_message Sets a debug message class.

lr_vuser_status_message Generates and prints formatted output to the Controller or Console Vuser status area. Not applicable for Application Management tests.

lr_message Sends a message to the Vuser log and Output window or the Business Process Monitor log files.

Run-Time Functions

lr_load_dll Loads an external DLL.

lr_peek_events Indicates where a Vuser script can be paused.

lr_think_time Pauses script execution to emulate think time—the time a real user pauses
to think between actions.

lr_continue_on_error Specifies an error handling method.


lr_rendezvous Sets a rendezvous point in a Vuser script. Not applicable for Application
Management tests.

Informational Functions

lr_user_data_point Records a user-defined data sample.

lr_whoami Returns information about a Vuser to the Vuser script. Not applicable for
Application Management tests.

lr_get_host_name Returns the name of the host executing the Vuser script.

lr_get_master_host_name Returns the name of the machine running the LoadRunner Controller or Tuning Console. Not applicable for Application Management tests.

Command Line Parsing Functions

lr_get_attrib_double Retrieves a double type variable used on the script command line.

lr_get_attrib_long Retrieves a long type variable used on the script command line.

lr_get_attrib_string Retrieves a string used on the script command line.


Best Practices for Performance Testing - Do

- Clear the application and database logs after each performance test run. Excessively large log files may artificially skew the performance results.
- Identify the correct server software and hardware to mirror your production environment.
- Use a single graphical user interface (GUI) client to capture end-user response time while a load is generated on the system. You may need to generate load by using different client computers, but to make sense of client-side data, such as response time or requests per second, you should consolidate data at a single client and generate results based on the average values.
- Include a buffer time between the incremental increases of users during a load test.
- Use different data parameters for each simulated user to create a more realistic load simulation.
- Monitor all computers involved in the test, including the client that generates the load. This is important because you should not overly stress the client.
- Prioritize your scenarios according to critical functionality and high-volume transactions.
- Use a zero think time if you need to fire concurrent requests. This can help you identify bottleneck issues.
- Stress test critical components of the system to assess their independent thresholds.
Can anyone explain the typical process for load testing?

Step 1: Planning the test.

Here, we develop a clearly defined test plan to ensure the test scenarios we develop will
accomplish load-testing objectives. 

Step 2: Creating Vusers. 


Here, we create Vuser scripts that contain tasks performed by each Vuser, tasks performed by
Vusers as a whole, and tasks measured as transactions. 

Step 3: Creating the scenario. 


A scenario describes the events that occur during a testing session. It includes a list of
machines, scripts, and Vusers that run during the scenario. We create scenarios using
LoadRunner Controller. We can create manual scenarios as well as goal-oriented scenarios. In
manual scenarios, we define the number of Vusers, the load generator machines, and
percentage of Vusers to be assigned to each script. For web tests, we may create a goal-oriented scenario where we define the goal that our test has to achieve. LoadRunner
automatically builds a scenario for us. 

Step 4: Running the scenario.


We emulate load on the server by instructing multiple Vusers to perform tasks simultaneously.
Before the testing, we set the scenario configuration and scheduling. We can run the entire
scenario, Vuser groups, or individual Vusers. 

Step 5: Monitoring the scenario.


We monitor scenario execution using the LoadRunner online runtime, transaction, system
resource, Web resource, Web server resource, Web application server resource, database
server resource, network delay, streaming media resource, firewall server resource, ERP server
resource, and Java performance monitors. 

Step 6: Analyzing test results. 


During scenario execution, LoadRunner records the performance of the application under
different loads. We use LoadRunner’s graphs and reports to analyze the application’s
performance. 
