
Under The Hood: The LoadRunner Compiler

by Suresh Nageswaran
Kanbay Inc. 03/01/2005

Compilers and interpreters have always interested me. When I started tinkering with Mercury LoadRunner's implementation, the initial motivation was simply to try and get an understanding of the engine under the hood. The idea was to acquire an edge by going from the documented to the undocumented. The approach pays rich dividends: avenues for extending the tool and innovative debugging options are thrown open to the seeker.

So what's inside the bonnet? Mercury Interactive (MI) is a company run by rather smart people. Well, they wouldn't be worth close to a billion dollars if they weren't! As with all product companies, they work in a market segment that houses stiff and hardy competitors like Segue Software's SilkPerformer and IBM Rational Robot, plus a horde of open-source tools like OpenSTA. Their success depends heavily on time-to-market, and their innovation lay in leveraging existing platforms and tools as the foundation of their own product.

As a first step in this journey of understanding, it becomes clear that the LR compiler is a native ANSI C implementation. This is easily verified by looking at \bin\lcc.txt and \bin\gcpp.txt. MI took two disparate open-source tools, stuck them together, and created an interpreter interface as a front end. The tools in question are:

- The GNU C Pre-Processor
- The LCC Retargetable C Compiler

The tools I used to acquire this re-engineered understanding were rather simple: Dependency Walker (for looking at exports and function signatures) and TextPad (for parsing reg-exps on Win32 platforms).

Dependency Walker 2.0 comes with a rather nifty feature called Application Profiling. This is a technique for watching a running application to see what modules it loads. Dynamically loaded modules that are not necessarily reported in any of the import tables of other modules can be detected this way.
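This matters because a module pulled in at run time never shows up in the caller's static import table. As a rough illustration, here is a minimal Win32 C sketch of dynamic loading; the DLL name and export below are hypothetical, not anything shipped with LoadRunner:

/* Minimal sketch: a DLL loaded at run time does not appear in the
   caller's static import table, which is why run-time profiling is
   needed to spot it. The DLL and export names are made up. */
#include <windows.h>
#include <stdio.h>

typedef int (WINAPI *PLUGIN_FUNC)(void);

int main(void)
{
    HMODULE hMod = LoadLibraryA("some_plugin.dll");   /* hypothetical DLL */
    if (hMod == NULL) {
        printf("LoadLibrary failed: %lu\n", GetLastError());
        return 1;
    }

    PLUGIN_FUNC fn = (PLUGIN_FUNC)GetProcAddress(hMod, "some_export");
    if (fn != NULL)
        fn();                                         /* call the export */

    FreeLibrary(hMod);
    return 0;
}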

Figure 1: Dependency Walker Application Profiler

We start from our friend MDRV.exe (the Multi-threaded Driver Process), which sits smugly in the bin folder. This process does a whole bunch of things, including:

- Invoking the compiler to compile LR code
- Invoking the interpreter (effectively acting as a runtime)
- Actually running on a generator host
- Debugging

The figure below shows what MDRV throws up as options when invoked at the command line.

Figure 2: mdrv.exe

One way to understand MDRV from a runtime perspective is to compare it to the Win32 and Java platforms. Here's a simple comparison.

Win32 world:
- .c -> [Compiler] -> .obj (parsed into machine opcode)
- .obj / .lib -> [Linker] -> .exe / .dll, run by the runtime [Explorer]

Java world:
- .java -> [javac compiler] -> .class (converted into machine opcode); there is no linker step
- .class is run by the runtime [JVM]

LoadRunner world:
- .c -> [GNU CPP] -> combined_XX.c; pre_cci.c
- combined_XX.c; pre_cci.c -> [LCC (CCI)] -> .ci compiled image (converted into machine opcode)
- .usr + .ci are run by the runtime [MDRV]

So now that we have a little more insight into the compilation process, let's walk through it step by step. We start with a barebones script - one having exactly one action with maybe an lr_output_message() - and invoke MDRV on it for only the compilation part. Initially the script has only the following files: vuser_init.c, Action.c, vuser_end.c, default.cfg, default.usp and BareScript.usr. The command-line arguments look like this:

mdrv.exe -usr BareScript.usr -compile_only

Application profiling on this run shows an automatic invocation of cpp.exe, and the command-line options passed to CPP.EXE are captured by the App Profiler. Here's what it looks like:

Program Executable: c:\loadrunner\bin\CPP.EXE
Program Arguments: -f"c:\projects\loadrunner\barescript\options.txt"

MDRV has created a couple of files prior to invoking the C PreProcessor. The first is combined_BareScript.c, which actually serves the purpose of being a header file. Here's what it has:

//combined_BareScript.c
#include "lrun.h"
#include "vuser_init.c"
#include "Action.c"
#include "vuser_end.c"
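For reference, the barebones Action.c described above would look something like this minimal sketch (the message text is made up; vuser_init.c and vuser_end.c are analogous single-function files):

// Action.c - minimal single-action vuser script (sketch)
// lr_output_message() is declared in lrun.h, which is pulled in
// via combined_BareScript.c at compile time.
Action()
{
    lr_output_message("Hello from the barebones script");
    return 0;
}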

The other file that appears is pre_cci.c, which is a concatenation of all our C source files and the lrun.h header file. The lrun.h header contains declarations for the various LR API functions we use - particularly lr_output_message() - and an external symbol like that can only be resolved if the header is present. The options passed to the pre-processor were defined by MDRV in options.txt. Here's what that file contains:

-+ -DCCI -D_IDA_XL -DWINNT -Ic:\projects\loadrunner\barescript -Ic:\loadrunner\include -ec:\projects\loadrunner\barescript\logfile.log c:\projects\loadrunner\barescript\combined_barescript.c c:\projects\loadrunner\barescript\pre_cci.c

The GNU preprocessor's command-line options are documented in the GNU CPP manual. The -D options are defines; in effect, MDRV is defining CCI, _IDA_XL and WINNT. The WINNT define is presumably the target platform, and the headers in the include directories seem to indicate the presence of an LR_UNIX definition as well - as an exercise, we could try that as an option to see if the output is a Unix-compatible image. The -I options are include directory search paths, and the -e option points to a file to use as an error log. The last two parameters are the files actually being put through the preprocessor. The output of this step is still the pre_cci.c file.

The next item that got invoked was CCI.exe. Here's what DepWalk found as command-line parameters:

Program Executable: c:\loadrunner\bin\CCI.EXE
Program Arguments: -errout="c:\projects\loadrunner\barescript\logfile.log" -c "c:\projects\loadrunner\barescript\pre_cci.c"

This is straightforward - the -errout option points to an error log, and -c names the target to compile, which happens to be our pre_cci.c file. The output of this process is the BareScript.ci file - our final compiled image.

But we're not done yet. We still have to figure out how to run this script. For that we simply go to the command line and invoke:

mdrv -usr BareScript.usr -file BareScript.ci

Now open up the Task Manager and look at the list of processes. Sure enough, mdrv is now a running process and is executing our vuser script! The Controller engine, then, is simply one large 'for' loop, spitting out mdrv instances using the Windows CreateThread() API call.
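To make that mental model concrete, here is a purely illustrative C sketch of a "big loop of drivers". The article mentions CreateThread(); this sketch launches separate mdrv.exe processes with CreateProcess() simply because that is the easiest way to show the idea - it is not a claim about how the Mercury Controller is actually implemented, and the paths and vuser count are made up:

/* Illustrative only: a loop that launches mdrv.exe once per vuser.
   Paths and the vuser count are hypothetical; the real Controller's
   internals are not documented here. */
#include <windows.h>
#include <stdio.h>

int main(void)
{
    const char *cmd = "c:\\loadrunner\\bin\\mdrv.exe "
                      "-usr c:\\projects\\loadrunner\\barescript\\BareScript.usr "
                      "-file c:\\projects\\loadrunner\\barescript\\BareScript.ci";
    int vusers = 5;   /* hypothetical number of vusers */
    int i;

    for (i = 0; i < vusers; i++) {
        STARTUPINFOA si;
        PROCESS_INFORMATION pi;
        char cmdline[512];

        ZeroMemory(&si, sizeof(si));
        si.cb = sizeof(si);
        ZeroMemory(&pi, sizeof(pi));
        lstrcpynA(cmdline, cmd, sizeof(cmdline));  /* CreateProcess may modify this buffer */

        if (!CreateProcessA(NULL, cmdline, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
            printf("CreateProcess failed for vuser %d: %lu\n", i, GetLastError());
            continue;
        }
        /* We don't wait here; each mdrv instance runs its vuser independently. */
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;
}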

Of course, this is a huge oversimplification, but it puts things into perspective. In the next article, we shall look at what a Controller run throws up.

Suresh Nageswaran is a senior performance analyst for the Independent Testing Practice at Kanbay Software (KBAY), an Illinois-based company specializing in business solutions for the financial industry. He has spent the past eight years helping financial institutions, ISVs and large enterprises achieve optimal performance from their solutions. He can be reached at sznageswaran-at-kanbay.com or punekarat-yahoo.com.

Loadtester Methodology

Our company uses Mercury's Business Technology Optimization (BTO) cycle to deliver performance testing services. Our model is based on the illustration below:

[Illustration: performance services model aligned to the Mercury BTO cycle]

How our services and deliverables align with BTO:

ASSESS
Tasks: Define Business Goals; Organizational Impacts; Define Roles Involved, with Names; Hardware Information; Software Information; Other Environment Information; High Level Milestones; High Level Timelines; Calculate Expected ROI (cost savings, process improvements).
Associated deliverables: Project Plan; Performance Plan; Infrastructure/Component Diagram; Performance Survey.

SCOPE
Tasks: Choose Project; Choose Issues to Solve; Measure Value Provided by Performance Services; Define Objective in Measurable Terms; Increase Scope to Other LOBs.
Associated deliverables: Performance Survey; Performance Test Plan; Project Plan.

DESIGN
Tasks: Build High Level Roadmap; Create Performance Test Plan; Create Key Performance Indicators; Define Specific Business Processes; Sync Players; Assign Resources; Communications; Create Best Practices Guide or a new section for this project; Training (official or on the job); Install Software (Mercury or client/project application).
Associated deliverables: Performance Task List (this document); Performance Test Plan; KPI Trending document; Performance Survey; Project Plan; Performance Services Best Practices Guide.

IMPLEMENT
Tasks: Create Scripts Based on Business Processes; Create Test Scenarios; Create Monitors for the Application; Execute Load Testing - Snapshot (online monitors), Baselines (1 business process, 1 Vuser), Bottleneck Identification (20% test), Benchmark (100% test), Scalability (120% or higher), Stress (to first breaking point), Volume/Soak (long periods to validate memory release); Provide Results to the Project Team; Maintain Scripts and Other Files for Future Use (versioning); Maintain the Test Environment for Future Use.
Associated deliverables: Scripts; Scenario files; Results files; Analysis files.

VALIDATE
Tasks: Use Results to Prove Out ROI (KPI trending over time); Document Value for the Project; Create a High Level Summary of ROI for Upper Management.
Associated deliverables: KPI Trending document; Performance Services ROI Executive Summary.
