Answers to Advanced Software Testing Interview Questions


Ans 1: Advantage of the path coverage metric: path coverage requires extremely thorough testing.
Disadvantages of the path coverage metric:
1) Since loops introduce an unbounded number of paths, this metric considers only a limited number of looping possibilities.
2) The number of paths is exponential in the number of branches. For example, a function containing 10 if-statements has 1024 paths to test; adding just one more if-statement doubles the count to 2048.
3) Many paths are impossible to exercise because of relationships among the data.

Ans 2: The main advantage of decision coverage is its simplicity, and it is free from many of the problems of statement coverage. Its disadvantage is that it ignores branches within Boolean expressions that arise from short-circuit operators.

Ans 3: Drawbacks of the statement coverage metric:
1) It is insensitive to some control structures.
2) It does not report whether loops reach their termination condition, only whether the loop body was executed. In C, C++, and Java, this limitation affects loops that contain break statements.
3) It is completely insensitive to the logical operators (|| and &&).
4) It cannot distinguish consecutive switch labels.

Ans 4: Advantages of the statement coverage metric:
1) The main advantage of statement coverage is that it can be applied directly to object code and does not require processing the source code. Usually the
performance profilers use this metric.
2) If bugs are evenly distributed through the code, then the percentage of executable statements covered reflects the percentage of faults discovered.

Ans 5: Automation Testing versus Manual Testing guidelines: I met with my team's automation experts a few weeks back to get their input on when to automate and when to test manually. The general rule of thumb has always been to use common sense: if you are only going to run a test once or twice, or the test is really expensive to automate, it is most likely a manual test. But then again, what good is saying "use common sense" when you need a deterministic set of guidelines on how and when to automate?

Pros of automation:
* If you have to run a set of tests repeatedly, automation is a huge win.
* It lets you run automation against code that changes frequently, to catch regressions in a timely manner.
* It lets you run automation in mainstream scenarios to catch regressions in a timely manner (see "What is a Nightly").
* It aids in testing a large test matrix (different languages on different OS platforms), since automated tests can run at the same time on different machines, whereas manual tests would have to be run sequentially.

Cons of automation:
* It costs more to automate. Writing the test cases and writing or configuring the automation framework costs more initially than running the test manually.
* You can't automate visual checks; for example, if you can't tell the font colour via code or the automation tool, it is a manual test.

Pros of manual testing:
* If the test case only runs twice per coding milestone, it most likely should be a manual test.
* It costs less than automating the test.
* It allows the tester to perform more ad-hoc (random) testing. In my experience, more bugs are found via ad-hoc testing than via automation, and the more time a tester spends playing with the feature, the greater the odds of finding real user bugs.

Cons of manual testing:
* Running tests manually can be very time consuming.
* Each time there is a new build, the tester must rerun all required tests, which after
a while would become very mundane and tiresome.

Other deciding factors:
* What you automate depends on the tools you use. If the tools have limitations, those tests remain manual.
* Is the return on investment worth automating? Is what you get out of automation worth the cost of setting up and supporting the test cases, the automation framework, and the system that runs the test cases?

Ans: A Test Scenario is a user workflow in the application. Example: checking mail in Gmail is a scenario in which the user logs in, checks the mail in the inbox, and then logs off. This scenario can have two different test cases, one for login and the other for the inbox. So a Test Scenario can consist of several test cases.

Ans: The simple answer to the question "Can automated testing replace all manual testing?" is no. Automated functional tests can be used for regression testing (which is a small part of the overall testing effort). If an organization is running the same manual regression tests repeatedly, then automated tests can replace some of that effort, but they also add the effort of maintaining the tests, which is sometimes more than the work required to just run the tests manually. When I say "some of the effort", I mean that test failures from an automated test run still must be analyzed manually. Also, any part of the process of provisioning and setting up the machine to run the tests, kicking off the test run, and babysitting it along the way that isn't automated will still require manual attention.

Ans: The basic assumptions behind coverage analysis tell us about the strengths and limitations of this testing technique. Some fundamental assumptions are listed below.
* Bugs relate to control flow, and you can expose bugs by varying the control flow [Beizer1990 p.60]. For example, a programmer wrote "if (c)" rather than "if (!c)".
* You can look for failures without knowing what failures might occur, and all tests
are reliable, in that successful test runs imply program correctness [Morell1990]. The tester understands what a correct version of the program would do and can identify differences from the correct behaviour.
* Other assumptions include achievable specifications, no errors of omission, and no unreachable code.
Clearly, these assumptions do not always hold. Coverage analysis exposes some plausible bugs but does not come close to exposing all classes of bugs. Coverage analysis provides more benefit when applied to an application that makes a lot of decisions than to data-centric applications, such as a database application.

Ans: For test automation the entry criteria are:
* Availability of a stable application under test (roughly 80% of test cases passing).
* Availability of the automation test tool with the required add-ins and patches.
* Availability of a stable and controlled test environment.
* Automation test strategy sign-off: scope (types of tests), functionalities (features to be automated), and assumptions.
* SIT or UAT sign-off.
* Signed-off manual test cases to be provided.
* Availability of a stable test bed.

Ans: If you are a Lead Automation Engineer, what questions would you ask yourself and your manager while deciding whether to automate the tests? The best approach is to raise the following questions:
1) Automating this test and running it once will cost more than simply running it manually once. How much more?
2) An automated test has a finite lifetime, during which it must recoup that additional
cost. Is this test likely to die sooner or later? What events are likely to end it?
3) During its lifetime, how likely is this test to find additional bugs (beyond whatever bugs it found the first time it ran)? How does this uncertain benefit balance against the cost of automation?
4) What is the return on investment?

Ans: Keyword-driven testing: this requires the development of data tables and keywords, independent of the test automation tool used to execute them and of the test script code that "drives" the application under test and the data. Keyword-driven tests look very similar to manual test cases. In a keyword-driven test, the functionality of the application under test is documented in a table as well as in step-by-step instructions for each test. In this method the entire process is data-driven, including the functionality. A minimal sketch of the idea appears after the lists below.

The merits of keyword-driven testing are as follows:
- The detailed test plan can be written in spreadsheet format containing all input and verification data.
- If "utility" scripts can be created by someone proficient in the automation tool's scripting language before the detailed test plan is written, then the tester can use the automated test tool immediately via the "spreadsheet-input" method, without needing to learn the scripting language.
- The tester need only learn the "keywords" required and the specific format to use within the test plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.

The demerits of keyword-driven testing are as follows:
- Development of "customized" (application-specific) functions and utilities requires proficiency in the tool's scripting language. (Note that this is also true for any method.)
- If the application requires more than a few "customized" utilities, the tester will have to learn a number of "keywords" and special formats. This can be time-consuming and may have an initial impact on test plan development. Once the testers get used to this, however, the time required to produce a test case is greatly improved.
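As a rough illustration of the idea, here is a minimal keyword-driven sketch in Python; the keyword names, the step functions, and the Gmail-style login steps are hypothetical placeholders rather than any real tool's API.

```python
# Keyword-driven testing sketch: a test is a table of (keyword, arguments) rows,
# and a small driver maps each keyword to a function written once by an
# automation engineer. Keyword names and step functions are hypothetical.

from typing import Callable, Dict, List, Tuple

def launch_app() -> None:
    print("launching application under test")

def enter_text(field: str, value: str) -> None:
    print(f"typing {value!r} into {field}")

def click(control: str) -> None:
    print(f"clicking {control}")

def verify_text(field: str, expected: str) -> None:
    actual = expected            # a real keyword would read the value from the UI
    assert actual == expected, f"{field}: expected {expected!r}, got {actual!r}"

KEYWORDS: Dict[str, Callable[..., None]] = {
    "LAUNCH": launch_app,
    "ENTER": enter_text,
    "CLICK": click,
    "VERIFY": verify_text,
}

# The "spreadsheet" a tester writes: readable like a manual test case,
# no scripting knowledge needed.
login_test: List[Tuple[str, ...]] = [
    ("LAUNCH",),
    ("ENTER", "username", "alice"),
    ("ENTER", "password", "secret"),
    ("CLICK", "login"),
    ("VERIFY", "title", "Inbox"),
]

def run_test(steps: List[Tuple[str, ...]]) -> None:
    for keyword, *args in steps:     # one row = one test step
        KEYWORDS[keyword](*args)

if __name__ == "__main__":
    run_test(login_test)
    print("PASS")
```

In a real framework the table would be loaded from a spreadsheet export rather than defined in code, which is exactly the hand-off point between the tester and the automation engineer.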
Ans: The architecture of the Test Plan Driven method appears similar to that of the Functional Decomposition method, but in fact they are substantially different:
* Driver Script
  * Performs initialization, if required;
  * Calls the application-specific Controller Script, passing to it the file names of the test cases (which have been saved from the spreadsheets as tab-delimited files);
* Controller Script
  * Reads and processes the file name received from the Driver;
  * Matches on keywords contained in the input file;
  * Builds a parameter list from the records that follow;
  * Calls the Utility Scripts associated with the keywords, passing the created parameter list;
* Utility Scripts
  * Process the input parameter list received from the Controller Script;
  * Perform specific tasks (e.g. press a key or button, enter data, verify data), calling User Defined Functions if required;
  * Report any errors to a test report for the test case;
  * Return to the Controller Script;
* User Defined Functions
  * General and application-specific functions may be called by any of the above script types in order to perform specific tasks.
A minimal sketch of this layering appears after the advantages and disadvantages below.

Advantages: this method has all of the advantages of the Functional Decomposition method, as well as the following:
* The detailed test plan can be written in spreadsheet format containing all input and verification data, so the tester only needs to write it once, rather than, for example, writing it in Word and then creating input and
verification files as is required by the Functional Decomposition method.
* The test plan does not necessarily have to be written using MS Excel. Any format can be used from which tab-delimited or comma-delimited files can be saved (e.g. an Access database).
* If utility scripts can be created by someone proficient in the automation tool's scripting language before the detailed test plan is written, then the tester can use the automated test tool immediately via the "spreadsheet-input" method, without needing to learn the scripting language. The tester need only learn the keywords required and the specific format to use within the test plan. This allows the tester to be productive with the test tool very quickly, and allows more extensive training in the test tool to be scheduled at a more convenient time.
* If detailed test cases already exist in some other format, it is not difficult to translate them into the spreadsheet format.
* After a number of generic utility scripts have been created for testing an application, most of them can usually be reused to test another application. This allows the organization to get its automated testing up and running (for most applications) within a few days rather than weeks.

Disadvantages:
* Development of customized (application-specific) functions and utilities requires proficiency in the tool's scripting language. Note that this is also true of the Functional Decomposition method and, frankly, of any method used, including record/playback.
* If the application requires more than a few customized utilities, the tester will have to learn a number of keywords and special formats. This can be time-consuming and may have an initial impact on test plan development. Once the testers get used to this, however, the time required to produce a test case is greatly improved.
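Below is a rough sketch of the Driver / Controller / Utility layering described above, assuming test cases saved as tab-delimited files; the file name, keywords, and utility functions are made up for illustration and do not correspond to any specific automation tool.

```python
# Test Plan Driven sketch: a Driver hands tab-delimited test case files to a
# Controller, which matches keywords and calls Utility functions with the
# parameter list built from each record. All names are illustrative.

from pathlib import Path
from typing import List

TEST_REPORT: List[str] = []  # the "test report" the utilities write errors to

# --- Utility scripts --------------------------------------------------------
def enter_data(field: str, value: str) -> None:
    print(f"entering {value!r} into {field}")

def press_button(name: str) -> None:
    print(f"pressing {name}")

def verify_data(field: str, expected: str) -> None:
    actual = expected            # a real utility would read the value from the UI
    if actual != expected:
        TEST_REPORT.append(f"MISMATCH in {field}: expected {expected!r}, got {actual!r}")

UTILITIES = {"ENTER": enter_data, "PRESS": press_button, "VERIFY": verify_data}

# --- Controller script ------------------------------------------------------
def controller(test_case_file: Path) -> None:
    for record in test_case_file.read_text().splitlines():
        if not record.strip():
            continue
        keyword, *params = record.split("\t")   # keyword, then its parameter list
        UTILITIES[keyword](*params)

# --- Driver script ----------------------------------------------------------
def driver(test_case_files: List[Path]) -> None:
    for f in test_case_files:                    # initialization would happen here
        controller(f)
    print("errors:", TEST_REPORT or "none")

if __name__ == "__main__":
    sample = Path("login_case.tsv")              # stands in for a spreadsheet export
    sample.write_text("ENTER\tusername\talice\nPRESS\tlogin\nVERIFY\ttitle\tInbox\n")
    driver([sample])
```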
Ans: In comparison testing, we compare the old application with the new application and see whether the new application works better than the old one. Comparison testing means comparing your software with a better one or with a competitor's product, basically comparing the performance of the software. For example, if you have to do comparison testing of a PDF converter (a desktop application), you will compare your software with your competitor's on the basis of: 1) speed of conversion of a PDF file into Word; 2) quality of the converted file.

Ans: Parallel Testing. Parallel (audit) testing is a type of testing where the tester reconciles the output of the new system with the output of the current system in order to verify that the new system operates correctly. Alternatively, it means comparing our product or application build with other products existing in the market. Parallel testing is also known as comparative or competitive testing: testing a newly developed system and comparing the results with an already existing system to check for any discrepancies between them.

Ans: Automated Testing. Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process. Test automation can be expensive, and it is usually employed in combination with
manual exploratory testing. It can be made cost-effective in the longer term though, especially in regression testing. One way to generate test cases automatically is model-based testing, where a model of the system is used for test case generation, but research continues into a variety of methodologies for doing so. What to automate, when to automate, and even whether one really needs automation are crucial decisions that the testing (or development) team has to take. Selecting the correct features of the product for automation largely determines the success of the automation; unstable features or features that are undergoing changes should be avoided.

Ans: Data Flow Diagram (DFD). A data flow diagram is a graphical representation of the "flow" of data through an information system. A data flow diagram can also be used to visualize data processing. It is common practice for a designer to draw a context-level DFD first, which shows the interaction between the system and outside entities. The process model is typically used in structured analysis and design methods. Also called a data flow diagram (DFD), it shows the flow of information through a system. Each process transforms inputs into outputs. The model generally starts with a context diagram showing the system as a single process connected to external entities outside the system boundary. This process explodes to a lower-level DFD that divides the system into smaller parts and balances the flow of information between parent and child diagrams. Many diagram levels may be needed to express a complex system. Primitive processes, those that don't explode to a child diagram, are usually described in a connected textual specification.

Ans: Traceability Matrix. Traceability means that we would like to be able to trace back and forth how and where any work product fulfils the directions of the preceding product. The matrix deals with the "where", while the "how" we have to work out ourselves once we know the "where".

A traceability matrix is created by associating requirements with the work products that satisfy them. Tests are associated with the requirements on which they are based and with the product tested to meet the requirement. A traceability matrix can include more than this. In traceability, the relationship of driver to satisfier can be one-to-one, one-to-many, many-to-one, or many-to-many. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan. Traceability ensures completeness: all lower-level requirements come from higher-level requirements, and all higher-level requirements are allocated to lower-level requirements. Traceability is also used to manage change and provides the basis for test planning.

Ans: Defect leakage. Defect leakage refers to defects found or reproduced by the client or user which the tester was unable to find; it is the number of bugs found in the field that were not found internally. There are a few ways to express this:
* total number of leaked defects (a simple count)
* defects per customer: number of leaked defects divided by the number of customers running that release
* % found in the field: number of leaked defects divided by the total number of defects found in that release
In theory, this can be measured at any stage: the number of defects leaked from development into QA, the number leaked from QA into beta certification, and so on. I've mostly used it for customers in the field, though.

Ans: Configuration Management. Configuration management is a discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
Configuration management (CM) is the detailed recording and updating of information that describes an enterprise's computer systems and networks, including all hardware and software components. Such information typically includes the versions and updates that have been applied to installed software packages and the locations and network addresses of hardware devices. Special configuration management software is available. When a system needs a hardware or software upgrade, a computer technician can access the configuration management program and database to see what is currently installed, and can then make a more informed decision about the upgrade needed. An advantage of a configuration management application is that the entire collection of systems can be reviewed to make sure that changes made to one system do not adversely affect any of the other systems.

Ans: Statement coverage in software testing asks: has each line of the source code been executed? Statement coverage is one way of measuring code coverage; it describes the degree to which the code of a program has been exercised by tests. Statement coverage means we need to supply test cases statement by statement. For the example in the question, 3 test cases are needed for statement coverage and 2 for branch coverage; branch coverage covers both the true and false outcomes of each if-statement, and for that example path coverage = branch coverage + 1.

Ans: Multiple condition coverage metric. Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. 100% multiple condition coverage implies 100% condition determination coverage. A drawback of this metric is that it becomes tedious to find the minimum number of test cases required, especially for very complex Boolean expressions. Another drawback is that the number of test cases required can vary to a large extent among conditions of similar complexity. The sketch below illustrates the difference between these coverage levels on a small decision.
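As a small illustration of how these coverage levels differ, the sketch below uses a single decision with two Boolean sub-expressions; the function and test inputs are hypothetical and no particular coverage tool is assumed.

```python
# A single decision with two Boolean sub-expressions: `a and b`.
# - Statement coverage: the one test (True, True) executes every statement.
# - Branch (decision) coverage: additionally needs a test where the decision is false.
# - Multiple condition coverage: needs every combination of a and b; note that with
#   short-circuit evaluation, b is never even evaluated when a is False, which is
#   exactly the kind of branch that plain decision coverage ignores.

def grant_access(a: bool, b: bool) -> str:
    result = "denied"
    if a and b:              # decision with two sub-expressions
        result = "granted"
    return result

statement_tests = [(True, True)]                                   # 1 test
branch_tests = [(True, True), (False, True)]                       # 2 tests
multiple_condition_tests = [(True, True), (True, False),
                            (False, True), (False, False)]         # 4 tests

if __name__ == "__main__":
    for a, b in multiple_condition_tests:
        print(f"a={a}, b={b} -> {grant_access(a, b)}")
```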
Ans: Difference between code coverage analysis and test coverage analysis. The two terms mean much the same thing: code coverage analysis is sometimes called test coverage analysis. The academic world generally uses the term "test coverage", whereas practitioners use the term "code coverage".

Ans: Structural testing and functional testing. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic. Functional testing examines what the program accomplishes, without regard to how it works internally.

Ans: Probe Testing. It is almost the same as exploratory testing: a creative, intuitive process. Everything testers do is optimized to find bugs fast, so plans often change as testers learn more about the product and its weaknesses. Session-based test management is one method of organizing and directing exploratory testing; it allows us to provide meaningful reports to management while preserving the creativity that makes exploratory testing work. Descriptions of the method include sample session reports and a tool that produces metrics from those reports.

Ans: Purpose of automation testing tools. The real purpose of automated test tools is to automate regression testing. This means we must have, or must develop, a database of detailed, repeatable test cases, and this suite of tests is run every time there is a change to the application to ensure that the change does not produce unintended consequences.

Ans: Difference between retesting and regression testing. Regression testing is also a form of retesting, but the objective is different. Regression testing is done on every build change: we retest already tested functionality for any new bugs introduced by the change. Retesting is testing the same functionality again, which may be due to a bug fix or to a change in the implementation technology.
Ans: Best sequence of coverage goals as an implementation strategy:
1) Invoke at least one function in 90% of the source files (or classes).
2) Invoke 90% of the functions.
3) Attain 90% condition/decision coverage in each function.
4) Attain 100% condition/decision coverage.
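One rough way to track progress against such staged goals is sketched below; the per-function coverage figures are invented sample data, and the thresholds simply mirror the sequence above rather than the report format of any specific coverage tool.

```python
# Hypothetical per-function coverage data: file -> {function: condition/decision %}.
# In practice these numbers would come from a coverage tool's report.
coverage = {
    "billing.py": {"charge": 95.0, "refund": 88.0},
    "auth.py":    {"login": 100.0},
    "report.py":  {},                      # no function invoked yet
}

def files_with_any_function_invoked(cov) -> float:
    hit = sum(1 for funcs in cov.values() if any(p > 0 for p in funcs.values()))
    return 100.0 * hit / len(cov)

def functions_invoked(cov) -> float:
    funcs = [p for f in cov.values() for p in f.values()]
    return 100.0 * sum(1 for p in funcs if p > 0) / len(funcs) if funcs else 0.0

def min_function_coverage(cov) -> float:
    funcs = [p for f in cov.values() for p in f.values()]
    return min(funcs) if funcs else 0.0

goals = [
    ("files with at least one function invoked >= 90%", files_with_any_function_invoked(coverage) >= 90.0),
    ("functions invoked >= 90%",                         functions_invoked(coverage) >= 90.0),
    ("condition/decision coverage >= 90% per function",  min_function_coverage(coverage) >= 90.0),
    ("condition/decision coverage == 100% per function", min_function_coverage(coverage) == 100.0),
]

for description, met in goals:
    print(("MET  " if met else "UNMET"), description)
```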
