All About Software Testing


1) What is White Box Testing?
Structure-based testing is also known as ‘white-box’ or ‘glass-box’ testing because here the testers require knowledge of how the software is implemented and how it works. White box testing is a software testing approach that tests software with knowledge of the internal workings of the software. The white box approach is used in unit testing, which is usually performed by software developers. White box testing intends to execute code and test statements, branches, paths, decisions and data flow within the program being tested. White box testing and black box testing complement each other, as each approach has the potential to uncover a specific category of errors.
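As an illustration, here is a minimal white-box sketch in Python (the function and values are made up for illustration): the tests are chosen by looking at the code, so that both branches of the decision are executed.

    def classify(age):
        # Unit under test: one decision with two branches.
        if age >= 18:
            return "adult"
        return "minor"

    # White-box test selection: one test per branch of the decision.
    assert classify(21) == "adult"   # exercises the 'if' branch
    assert classify(12) == "minor"   # exercises the fall-through branch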

2) What is Black Box Testing?
Specification-based testing is also known as ‘black-box’ or input/output driven testing because it views the software as a black box with inputs and outputs. The testers have no knowledge of how the system or component is structured inside the box. In black-box testing the tester concentrates on what the software does, not how it does it.
* The definition mentions both functional and non-functional testing. Functional testing is concerned with what the system does: its features or functions. Non-functional testing is concerned with how well the system does it, and covers characteristics such as performance, usability, portability and maintainability.
* Specification-based techniques are appropriate at all levels of testing (component testing through to acceptance testing) where a specification exists. For example, when performing system or acceptance testing, the requirements specification or functional specification may form the basis of the tests.
* There are four specification-based or black-box techniques:
o Equivalence partitioning
o Boundary value analysis
o Decision tables
o State transition testing

3) What is Grey Box Testing?
A combination of the Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings. It can be performed by either development or testing teams. In gray box testing the tester applies a limited number of test cases to the internal workings of the software under test. For the remaining part of gray box testing, one takes a black box approach in applying inputs to the software under test and observing the outputs.

4) What is the difference between Validation and Verification?
Verification:
* Verification checks that we are building the product right: it involves testing that the right process is being followed and that work products conform to their specifications. Example question: Are we building the product right?
* Verification is static. This means that in verification the software is inspected, without being executed, by going through the code or documents line by line or function by function.
* Because the code is reviewed in verification, the location of a defect can be found.

Verification Techniques:
There are many different verification techniques but they all basically fall into two major categories – dynamic testing and static testing.

* Dynamic testing – Testing that involves the execution of a system or component. Basically, a number of test cases are chosen where each test case consists of test data. These input test cases are used to determine output test results. Dynamic testing can be further divided into three categories – functional testing, structural testing, and random testing.
* Structural testing – Testing that has full knowledge of the implementation of the system and is an example of white-box testing. It uses the information from the internal structure of a system to devise tests to check the operation of individual components. Functional and structural testing both choose test cases that investigate a particular characteristic of the system.
* Functional testing – Testing that involves identifying and testing all the functions of the system as defined within the requirements. This form of testing is an example of black-box testing since it involves no knowledge of the implementation of the system.
* Random testing – Testing that freely chooses test cases among the set of all possible test cases. The use of randomly determined inputs can detect faults that go undetected by other systematic testing techniques. Exhaustive testing, where the input test cases consist of every possible set of input values, is a form of random testing. Although exhaustive testing performed at every stage in the life cycle would result in a complete verification of the system, it is realistically impossible to accomplish.
* Static testing – Testing that does not involve the operation of the system or component. Some of these techniques are performed manually while others are automated. Static testing can be further divided into 2 categories – techniques that analyze consistency and techniques that measure some program property.
* Measurement techniques – Techniques that measure properties such as error proneness, understandability, and well-structured-ness.

Validation: checks that we are building the right product, i.e. that the developed software adheres to the requirements of the client. Example question: Was the right product built? Validation is dynamic. In validation, code is executed and the software is run to find defects. In validation the location of a defect cannot be found.

Validation Techniques:
There are also numerous validation techniques, including formal methods, fault injection, and dependability analysis.

* Formal methods – Formal methods are not only a verification technique but also a validation technique. ‘Formal methods’ means the use of mathematical and logical techniques to express, investigate, and analyze the specification, design, documentation, and behavior of both hardware and software.
* Fault injection – Fault injection is the intentional activation of faults by either hardware or software means to observe the system operation under fault conditions.
* Software fault injection – Errors are injected into the memory of the computer by software techniques. Software fault injection is basically a simulation of hardware fault injection.
* Risk analysis – Takes hazard analysis further by identifying the possible consequences of each hazard and their probability of occurring.
* Hardware fault injection – Can also be called physical fault injection because we are actually injecting faults into the physical hardware.

Difference between Verification and Validation:
* Verification takes place before validation, and not vice versa.
* Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.
* The inputs of verification are checklists, issues lists, walkthroughs and inspection meetings, reviews and meetings. The input of validation, on the other hand, is the test scenario and test cases which are derived from Software Requirement Specification, Software Design Document and use cases.
* The output of verification is a nearly perfect set of documents, plans, specifications, and requirements. The output of validation, on the other hand, is a nearly perfect, actual product.

5) What is the difference between Alpha and Beta Testing?
Alpha testing is performed in-house by developers and software QA personnel, and is done at the end of development.
Beta testing is performed by a few select prospective customers or by the general public; it is the final testing before releasing the application commercially.

6)  What is Usability Testing?
In usability testing the tester basically tests the ease with which the user interfaces can be used. It checks whether the application or the product built is user-friendly or not.
Usability Testing is a black box testing technique.
Usability testing also reveals whether users feel comfortable with your application or Web site according to different parameters - the flow, navigation and layout, speed and content - especially in comparison to prior or similar applications.
Usability Testing tests the following features of the software.
– How easy it is to use the software.
– How easy it is to learn the software.
– How convenient the software is to the end user.

Usability testing includes the following five components:
* Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
* Efficiency: How fast can experienced users accomplish tasks?
* Memorability: When users return to the design after a period of not using it, does the user remember enough to use it effectively the next time, or does the user have to start over again learning everything?
* Errors: How many errors do users make, how severe are these errors and how easily can they recover from the errors?
* Satisfaction: How much does the user like using the system?

Benefits of usability testing to the end user or the customer:
– Better quality software
– Software is easier to use
– Software is more readily accepted by users
– Shortens the learning curve for new users

Advantages of usability testing:
* A usability test can be adapted to cover many other types of testing such as functional testing, system integration testing, unit testing, smoke testing, etc.
* Usability testing can be very economical if planned properly, yet highly effective and beneficial.
* If proper resources (experienced and creative testers) are used, a usability test can help in fixing the problems that users may face even before the system is finally released to them. This may result in better performance and a more standard system.
* Usability testing can help in discovering potential bugs and pitfalls in the system which are generally not visible to developers and even escape the other types of testing.

Usability testing is a very wide area of testing and it needs a fairly high level of understanding of this field along with a creative mind. People involved in usability testing need skills like patience, the ability to listen to suggestions, and openness to welcome any idea; most important of all, they should have good observation skills to spot and fix the issues or problems.

7) What is Acceptance Testing?
* After system testing has been completed and all or most defects have been corrected, the system is delivered to the user or customer for acceptance testing.
* Acceptance testing is basically done by the user or customer although other stakeholders may be involved as well.
* The goal of acceptance testing is to establish confidence in the system.
* Acceptance testing is most often focused on validation-type testing.
* Acceptance testing may occur at more than just a single level, for example:
o A Commercial Off the shelf (COTS) software product may be acceptance tested when it is installed or integrated.
o Acceptance testing of the usability of the component may be done during component testing.
o Acceptance testing of a new functional enhancement may come before system testing.

The types of acceptance testing are:
* The User Acceptance test: focuses mainly on the functionality thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.
* The Operational Acceptance test: also known as the Production acceptance test, it validates whether the system meets the requirements for operation. In most organizations the operational acceptance test is performed by the system administrator before the system is released. The operational acceptance test may include testing of backup/restore, disaster recovery, maintenance tasks and periodic checks of security vulnerabilities.
* Contract Acceptance testing: It is performed against the contract’s acceptance criteria for producing custom developed software. Acceptance should be formally defined when the contract is agreed.
* Compliance acceptance testing: also known as regulation acceptance testing, it is performed against the regulations which must be adhered to, such as governmental, legal or safety regulations.

8)  What is System testing?
* In system testing the behavior of whole system/product is tested as defined by the scope of the development project or product.
* It may include tests based on risks and/or requirement specifications, business process, use cases, or other high level descriptions of system behavior, interactions with the operating systems, and system resources.
* System testing is most often the final test to verify that the system to be delivered meets the specification and its purpose.
* System testing is carried out by specialist testers or independent testers.
* System testing should investigate both the functional and the non-functional requirements of the system.

9) What is Integration testing?
* Integration testing tests the integration or interfaces between components, interactions with different parts of the system (such as the operating system, file system and hardware), or interfaces between systems.
* Integration testing is done by a specific integration tester or test team.
* Big Bang integration testing:
o In Big Bang integration testing all components or modules are integrated simultaneously, after which everything is tested as a whole.
o Big Bang testing has the advantage that everything is finished before integration testing starts.
o The major disadvantage is that in general it is time consuming and difficult to trace the cause of failures because of this late integration.
* Incremental testing:
o At the other extreme, all components are integrated one by one, and a test is carried out after each step.
o The incremental approach has the advantage that the defects are found early in a smaller assembly when it is relatively easy to detect the cause.
o A disadvantage is that it can be time-consuming since stubs and drivers have to be developed and used in the test.
o Within incremental integration testing a range of possibilities exists, partly depending on the system architecture:
– Top down: Testing takes place from top to bottom, following the control flow or architectural structure (e.g. starting from the GUI or main menu). Components or systems are substituted by stubs.
– Bottom up: Testing takes place from the bottom of the control flow upwards. Components or systems are substituted by drivers.
– Functional incremental: Integration and testing takes place on the basis of the functions and functionalities, as documented in the functional specification.

10)  What is Unit Testing?
* Unit testing is a method by which individual units of source code are tested to determine whether they are fit for use. A unit is the smallest testable part of an application, such as a function/procedure, class or interface.
* Unit tests are typically written and run by software developers to ensure that code meets its design and behaves as intended.
* The goal of unit testing is to isolate each part of the program and show that the individual parts are correct.
* A unit test provides a strict, written contract that the piece of code must satisfy. As a result, it affords several benefits. Unit tests find problems early in the development cycle.
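As a sketch, a typical unit test written with Python's standard unittest module might look like this (the add function is a hypothetical unit, chosen only for illustration):

    import unittest

    def add(a, b):
        """The unit under test: the smallest testable part."""
        return a + b

    class TestAdd(unittest.TestCase):
        def test_adds_positive_numbers(self):
            self.assertEqual(add(2, 3), 5)

        def test_adds_negative_numbers(self):
            self.assertEqual(add(-1, -1), -2)

    if __name__ == "__main__":
        unittest.main()

Each test isolates one behavior of the unit, so a failure points directly at the piece of code that broke.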

11) What is the difference between Inspection and Walkthrough?
Inspection:
* It is the most formal review type
* It is led by a trained moderator
* During inspection the documents are prepared and checked thoroughly by the reviewers before the meeting
* It involves peers to examine the product
* A separate preparation is carried out during which the product is examined and the defects are found
* The defects found are documented in a logging list or issue log
* A formal follow-up is carried out by the moderator applying exit criteria

The goals of inspection are:
i. It helps the author to improve the quality of the document under inspection
ii. It removes defects efficiently and as early as possible
iii. It improves product quality
iv. It creates common understanding by exchanging information
v. It enables learning from the defects found and prevents the occurrence of similar defects

Walkthrough:
* It is not a formal process/review
* It is led by the author
* The author guides the participants through the document according to his or her thought process, to achieve a common understanding and to gather feedback.
* It is useful for people who are not from the software discipline and are not used to, or cannot easily understand, the software development process.
* It is especially useful for higher level documents like the requirement specification, etc.

The goals of a walkthrough:
i. To present the document to people both within and outside the software discipline in order to gather information regarding the topic under documentation.
ii. To explain and transfer knowledge, and to evaluate the contents of the document.
iii. To achieve a common understanding and to gather feedback.
iv. To examine and discuss the validity of the proposed solutions

12) What are Boundary Value Analysis and Equivalence Partitioning?

* Boundary value analysis (BVA) is based on testing at the boundaries between partitions.
* Here we have both valid boundaries (in the valid partitions) and invalid boundaries (in the invalid partitions).
* As an example, consider a printer that has an input option for the number of copies to be made, from 1 to 99. To apply boundary value analysis, we take the minimum and maximum (boundary) values from the valid partition (1 and 99 in this case) together with the first or last value respectively in each of the invalid partitions adjacent to the valid partition (0 and 100 in this case). In this example we would have three equivalence partitioning tests (one from each of the three partitions) and four boundary value tests.

* Equivalence partitioning (EP) is a specification-based or black-box technique.
* It can be applied at any level of testing and is often a good technique to use first.
* The idea behind this technique is to divide (i.e. to partition) a set of test conditions into groups or sets that can be considered the same (i.e. the system should handle them equivalently), hence ‘equivalence partitioning’. Equivalence partitions are also known as equivalence classes – the two terms mean exactly the same thing.
* In the equivalence-partitioning technique we need to test only one condition from each partition. This is because we are assuming that all the conditions in one partition will be treated in the same way by the software. If one condition in a partition works, we assume all of the conditions in that partition will work, so there is little point in testing any of the others. Similarly, if one of the conditions in a partition does not work, then we assume that none of the conditions in that partition will work, so again there is little point in testing any more in that partition.

The main difference between EP and BVA is that EP picks one representative value from each partition, whereas BVA picks the values at the edges of each partition (and just outside them), since that is where defects are most likely to occur.
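The printer example above can be written down as a small Python sketch (the accepts validator is hypothetical): EP contributes one value per partition, while two-value BVA contributes the edges of the valid partition plus the adjacent invalid values.

    VALID_MIN, VALID_MAX = 1, 99    # valid partition for 'number of copies'

    def accepts(copies):
        # Hypothetical validator for the copies field.
        return VALID_MIN <= copies <= VALID_MAX

    # Equivalence partitioning: one representative value per partition.
    ep_values = [-5, 50, 150]       # invalid-low, valid, invalid-high
    assert [accepts(v) for v in ep_values] == [False, True, False]

    # Two-value boundary value analysis: edges plus adjacent invalid values.
    bva_values = [0, 1, 99, 100]
    assert [accepts(v) for v in bva_values] == [False, True, True, False]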

13) While testing why it is important to do both equivalence partitioning and boundary value analysis?
Technically, because every boundary is in some partition, if you did only boundary value analysis you would also have tested every equivalence partition. However, this approach may cause problems if a boundary value fails – was it only the boundary value that failed, or did the whole partition fail? Also, by testing only boundaries we would probably not give the users much confidence, as we would be using extreme values rather than normal values. The boundaries may also be more difficult (and therefore more costly) to set up. For example, in the printer copies example described earlier we identified the boundary values 0, 1, 99 and 100.

Suppose we test only the valid boundary values 1 and 99 and nothing in between. If both tests pass, this seems to indicate that all the values in between should also work. However, suppose that one page prints correctly, but 99 pages do not. Now we don’t know whether any set of more than one page works, so the first thing we would do would be to test for say 10 pages, i.e. a value from the equivalence partition. We recommend that you test the partitions separately from boundaries – this means choosing partition values that are NOT boundary values. However, if you use the three-value boundary value approach, then you would have valid boundary values of 1, 2, 98 and 99, so having a separate equivalence value in addition to the extra two boundary values would not give much additional benefit. But notice that one equivalence value, e.g. 10, replaces both of the extra two boundary values (2 and 98). This is why equivalence partitioning with two-value boundary value analysis is more efficient than three-value boundary value analysis.

14)  What is Random testing and Monkey Testing?
Random testing is sometimes called monkey testing. In random testing, data is generated randomly, often using a tool or some other automated mechanism. With this randomly generated input the system is then tested and the results are observed accordingly.
Random testing has the following weaknesses:
· The tests are not realistic.
· Many of the tests are redundant.
· You will spend more time analyzing the results.
· You cannot recreate the test if you do not record what data was used for testing.
This kind of testing is really of limited use and is normally performed by newcomers. Its best use is to see whether the system holds up under adverse effects. Who knows, we may be lucky and find some defects this way, but that is luck, and testing does not really work on luck; it works on planning.
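The last weakness can be mitigated by recording whatever drives the data generation. A minimal sketch (process_input is a hypothetical stand-in for the system under test): seeding the random generator and recording the seed makes every random run reproducible.

    import random

    def process_input(text):
        # Hypothetical stand-in for the system under test.
        return text.strip().upper()

    SEED = 12345                  # record this so the run can be recreated
    rng = random.Random(SEED)

    for _ in range(100):
        # Build a random string of printable characters.
        length = rng.randint(0, 50)
        data = "".join(chr(rng.randint(32, 126)) for _ in range(length))
        process_input(data)       # observe the result; no exception should escape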


15) What is Exploratory testing?
* As its name implies, exploratory testing is about exploring, finding out about the software, what it does, what it doesn’t do, what works and what doesn’t work. The tester is constantly making decisions about what to test next and where to spend the (limited) time. This is an approach that is most useful when there are no or poor specifications and when time is severely limited.
* Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution.
* The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used.
* The test design and test execution activities are performed in parallel, typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.
* Test logging is undertaken as test execution is performed, documenting the key aspects of what is tested, any defects found and any thoughts about possible further testing.
* It can also serve to complement other, more formal testing, helping to establish greater confidence in the software. In this way, exploratory testing can be used as a check on the formal test process by helping to ensure that the most serious defects have been found.


16) What is Endurance testing or Soak Testing?
* It is a type of non-functional testing.
* It is also known as Longevity testing.
* Endurance testing involves testing a system with a significant load extended over a significant period of time, to discover how the system behaves under sustained use. For example, in software testing, a system may behave exactly as expected when tested for 1 hour but when the same system is tested for 3 hours, problems such as memory leaks cause the system to fail or behave randomly.
* The goal is to discover how the system behaves under sustained use. That is, to ensure that the throughput and/or response times after some long period of sustained activity are as good as or better than at the beginning of the test.
* It is basically used to check for memory leaks.


17)  What is the difference between Performance, Load and Stress testing?
Performance, Load and Stress testing are type of non-functional testing.

Performance Testing:
* Performance testing is testing that is performed to determine how fast some aspect of a system performs under a particular workload.
* It can serve different purposes like it can demonstrate that the system meets performance criteria.
* It can compare two systems to find which performs better. Or it can measure what part of the system or workload causes the system to perform badly.
* This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions.

Load testing:
* A load test is a type of software testing conducted to understand the behavior of the application under a specific expected load.
* Load testing is performed to determine a system’s behavior under both normal and peak conditions.
* It helps to identify the maximum operating capacity of an application as well as any bottlenecks, and to determine which element is causing degradation. E.g. if the number of users is increased, how much CPU and memory will be consumed, and what are the network and bandwidth response times?
* Load testing can be done under controlled lab conditions to compare the capabilities of different systems or to accurately measure the capabilities of a single system.
* Load testing involves simulating real-life user load for the target application. It helps you determine how your application behaves when multiple users hit it simultaneously (see the sketch after the examples list below).
* Load testing differs from stress testing, which evaluates the extent to which a system keeps working when subjected to extreme work loads or when some of its hardware or software has been compromised.
* The primary goal of load testing is to define the maximum amount of work a system can handle without significant performance degradation.
* Examples of load testing include:
o Downloading a series of large files from the internet.
o Running multiple applications on a computer or server simultaneously.
o Assigning many jobs to a printer queue.
o Subjecting a server to a large amount of traffic.
o Writing and reading data to and from a hard disk continuously.
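A very small sketch of the simulated-user idea, using only Python's standard library (the simulated_user function is a hypothetical stand-in for a real transaction against the application):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def simulated_user(user_id):
        # Hypothetical transaction performed by one virtual user.
        start = time.perf_counter()
        sum(range(100_000))              # stand-in for a real request
        return time.perf_counter() - start

    # Simulate 50 concurrent users and report the worst response time.
    with ThreadPoolExecutor(max_workers=50) as pool:
        timings = list(pool.map(simulated_user, range(50)))

    print(f"max response time: {max(timings):.4f}s")

In a real load test the body of simulated_user would issue requests against the deployed application, and the load would be ramped up towards the expected peak.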

Stress Testing:
* Stress testing involves testing beyond normal operational capacity, often to a breaking point, in order to observe the results.
* It is a form of software testing that is used to determine the stability of a given system.
* It puts greater emphasis on robustness, availability, recoverability and error handling under a heavy load, rather than on what would be considered correct behavior under normal circumstances.
* The goals of such tests may be to ensure the software does not crash in conditions of insufficient computational resources (such as memory or disk space).

18)  What is Internationalization testing and localization testing?
* It is a type of non-functional testing.
* Internationalization is a process of designing a software application so that it can be adapted to various languages and regions without any changes.
* Whereas Localization is a process of adapting internationalized software for a specific region or language by adding local specific components and translating text.

19)  What is Security Testing?
* It is a type of non-functional testing.
* Security testing is basically a type of software testing done to check whether the application or the product is secure or not. It checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without any authorization.
* It is a process to determine that an information system protects data and maintains functionality as intended.
* Security testing is performed to check whether there is any information leakage, for example by encrypting the application or by using a wide range of software, hardware, firewalls, etc.
* Software security is about making software behave correctly in the presence of a malicious attack.
* The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.

20) What is the difference between Regression testing and Re-testing?
Confirmation testing or re-testing: when a test fails because of a defect, the defect is reported and a new version of the software, in which the defect has supposedly been fixed, is expected. In this case we need to execute the test again to confirm whether the defect has actually been fixed. This is known as confirmation testing, or re-testing. It is important to ensure that the test is executed in exactly the same way as the first time, using the same inputs, data and environment.
Hence, confirmation testing or re-testing is useful whenever a change is made to fix a defect.

Regression testing: during confirmation testing the defect is fixed and that part of the application starts working as intended. But there is a possibility that the fix may have introduced or uncovered a different defect elsewhere in the software. The way to detect these ‘unexpected side-effects’ of fixes is to do regression testing. The purpose of regression testing is to verify that modifications to the software or the environment have not caused unintended adverse side effects and that the system still meets its requirements. Regression tests are mostly automated, because the same tests have to be carried out again and again after every change and doing this manually would be very tedious. Regression tests are executed whenever the software changes, either as a result of fixes or of new or changed functionality.

21) What are Smoke and Sanity Testing and what is the difference?
Smoke Testing:
Smoke Testing is performed after a software build to ascertain that the critical functionalities of the program are working fine. It is executed "before" any detailed functional or regression tests are executed on the software build. The purpose is to reject a badly broken application, so that the QA team does not waste time installing and testing it.
In smoke testing, the test cases chosen cover the most important functionality or components of the system. The objective is not to perform exhaustive testing, but to verify that the critical functionalities of the system are working fine.
For example, a typical smoke test would be: verify that the application launches successfully, check that the GUI is responsive, etc.

Sanity testing:
After receiving a software build with minor changes in code or functionality, sanity testing is performed to ascertain that the bugs have been fixed and that no further issues have been introduced by these changes. The goal is to determine that the proposed functionality works roughly as expected. If the sanity test fails, the build is rejected to save the time and cost involved in more rigorous testing.
The objective is "not" to verify the new functionality thoroughly, but to determine that the developer has applied some rationality (sanity) while producing the software. For instance, if your scientific calculator gives the result 2 + 2 = 5, then there is no point in testing advanced functionality like sin 30 + cos 50.


Difference between them:
* Smoke testing is performed to ascertain that the critical functionalities of the program are working fine; sanity testing is done to check that new functionality works and bugs have been fixed.
* The objective of smoke testing is to verify the "stability" of the system in order to proceed with more rigorous testing; the objective of sanity testing is to verify the "rationality" of the system in order to proceed with more rigorous testing.
* Smoke testing is performed by developers or testers; sanity testing is usually performed by testers.
* Smoke testing is usually documented or scripted; sanity testing is usually undocumented and unscripted.
* Smoke testing is a subset of regression testing; sanity testing is a subset of acceptance testing.
* Smoke testing exercises the entire system from end to end; sanity testing exercises only a particular component of the entire system.
* Smoke testing is like a general health check-up; sanity testing is like a specialized health check-up.


22) Define Test Case.
"A test case is a set of inputs, execution conditions and expected results for a particular test objective. A test case is the smallest entity that is always executed as a unit, from beginning to end."

23)  What are the Black Box testing techniques?
Boundary Value Analysis, Equivalence Partitioning, Decision Tables, State Transition testing, and Error Guessing.

24)  What is Web Testing and what are the different testing can be done for web testing?
Web testing is essentially the testing of client/server applications – web servers/application servers and the thin client (browser).
* Functionality of applications that run in web pages (such as applets, JavaScript, plug-in applications), Validation of Internal and External links, Searches, pop-up windows.
* Usability testing.
* UI Testing- Images, text, frames, Page Layout & design, Page content
* Performance testing
* Load Testing
* Stress Testing
* Security Testing (Firewall, Database)
* Internationalization Testing.
* Compatibility testing - Different Browser.

25)  What are Functional and Non Functional Testing?
In functional testing, basically, the functions of a component or system are tested. It refers to activities that verify a specific action or function of the code. Functional tests tend to answer questions like “can the user do this?” or “does this particular feature work?”. This is typically described in a requirements specification or in a functional specification.

The techniques used for functional testing are often specification-based. Testing functionality can be done from two perspectives:
* Requirement-based testing: In this type of testing the requirements are prioritized depending on the risk criteria and accordingly the tests are prioritized. This will ensure that the most important and most critical tests are included in the testing effort.
* Business-process-based testing: In this type of testing the scenarios involved in the day-to-day business use of the system are described. It uses the knowledge of the business processes. For example, a personal and payroll system may have the business process along the lines of: someone joins the company, employee is paid on the regular basis and employee finally leaves the company.

In non-functional testing the quality characteristics of the component or system are tested. Non-functional refers to aspects of the software that may not be related to a specific function or user action, such as scalability or security, e.g. how many people can log in at once? Non-functional testing is performed at all levels, just like functional testing.

26) What is the difference between structural and functional testing?
* Structural testing is the testing of the structure of the system or component.
* Structural testing is often referred to as ‘white box’ or ‘glass box’ or ‘clear-box testing’ because in structural testing we are interested in what is happening ‘inside the system/application’.
* In structural testing the testers are required to have the knowledge of the internal implementations of the code. Here the testers require knowledge of how the software is implemented, how it works.
* During structural testing the tester is concentrating on how the software does it. For example, a structural technique wants to know how loops in the software are working. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
* Structural testing can be used at all levels of testing. Developers use structural testing in component testing and component integration testing, especially where there is good tool support for code coverage. Structural testing is also used in system and acceptance testing, but the structures are different. For example, the coverage of menu options or major business transactions could be the structural element in system or acceptance testing.

* Functional testing, as described in the previous question, tests the functions of a component or system. It refers to activities that verify a specific action or function of the code, tends to answer questions like “can the user do this?” or “does this particular feature work?”, and is typically described in a requirements specification or in a functional specification.
* The techniques used for functional testing are often specification-based. As before, functionality can be tested from a requirement-based perspective (tests prioritized according to the risk associated with each requirement) or a business-process-based perspective (tests based on scenarios from the day-to-day business use of the system).

27) What is Ad-hoc testing?
Ad-hoc testing is concerned with testing the application without following any rules or test cases. For ad-hoc testing one should have strong knowledge of the application.

28) What are “STUB” and “DRIVERS”?
In integration testing smaller units are integrated into larger units and larger units into the overall system. This differs from unit testing in that units are no longer tested independently but in groups, the focus shifting from the individual units to the interaction between them.


At this point “stubs” and “drivers” take over from test harnesses. A stub is a simulation of a particular sub-unit which can be used to simulate that unit in a larger assembly. For example if units A, B and C constitute the major parts of unit D then the overall assembly could be tested by assembling units A and B and a simulation of C, if C were not complete. Similarly if unit D itself was not complete it could be represented by a “driver” or a simulation of the super-unit.
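A minimal sketch of this arrangement in Python (the unit names follow the A/B/C/D example above and are hypothetical):

    class StubC:
        """Stub simulating the unfinished sub-unit C."""
        def lookup(self, key):
            return "canned-value"        # fixed, predictable response

    def unit_d(a_result, b_result, c):
        """Unit D, assembled from the results of A and B plus sub-unit C."""
        return f"{a_result}/{b_result}/{c.lookup('x')}"

    # Driver: stand-in code that invokes unit D with test inputs,
    # playing the role the incomplete super-unit would normally play.
    if __name__ == "__main__":
        print(unit_d("A-ok", "B-ok", StubC()))   # -> A-ok/B-ok/canned-value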

29) What are the different types of verification?
Verification is a static type of s/w testing, which means the code is not executed. The product is evaluated by going through the code. Types of verification are:
(a) Walkthrough  (b)  Inspection  (c) Reviews

Walkthroughs are informal, initiated by the author of the s/w product, who presents it to a colleague for assistance in locating defects or suggestions for improvement. They are usually unplanned. The author explains the product; the colleague comes up with observations; and the author notes down the relevant points and takes corrective actions.

Inspection is a thorough word-by-word checking of a software product with the intention of:
* Locating defects
* Confirming traceability of relevant requirements
* Checking for conformance to relevant standards and conventions
Inspections are more formal than walkthroughs. An inspection involves 5 major roles:
o Author: person who originally created the work product.
o Moderator: Person responsible for ensuring that the discussions proceed along productive lines.
o Reader: Person responsible for reading aloud small logical units of the work product
o Recorder: Person who records/documents all the defects that arise from the inspection team.
o Inspector: All of the inspection team members who analyze and detect the defects within the work product.

Review is a subsequent examination of a product for the purpose of monitoring earlier changes. It is a process in which one or more persons check the changed documents or data to determine if the changes are correct. It is also an analysis undertaken at a fixed point in time to determine the degree to which stated objectives have been reached.

30) Explain Fuzz testing?
Fuzz testing or fuzzing is a software testing technique, often automated or semi-automated, that involves providing invalid, unexpected, or random data as inputs to a computer program. The program is then monitored for exceptions such as crashes or failing built-in code assertions, and for potential memory leaks. Fuzzing is commonly used to test for security problems in software or computer systems.
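A toy fuzzer might look like the following sketch (parse_record is a hypothetical parser; the two caught exception types are assumed to be its documented failure modes, so only anything else counts as a finding):

    import random

    def parse_record(data):
        """Hypothetical parser under test: expects ASCII 'key=value'."""
        text = data.decode("ascii")       # may raise UnicodeDecodeError
        key, value = text.split("=")      # may raise ValueError
        return key, value

    rng = random.Random(0)                # fixed seed so findings are reproducible
    finds = 0
    for _ in range(1000):
        blob = bytes(rng.randrange(256) for _ in range(rng.randrange(32)))
        try:
            parse_record(blob)
        except (UnicodeDecodeError, ValueError):
            pass                          # expected, documented failure modes
        except Exception as exc:
            finds += 1
            print(f"unexpected crash on {blob!r}: {exc!r}")
    print(f"{finds} unexpected crashes in 1000 runs")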

31) What is Compatibility testing
* It is a type of non-functional testing.
* Compatibility testing is a type of software testing used to ensure compatibility of the system/application/website built with various other objects, such as other web browsers, hardware platforms, users (in the case of a very specific type of requirement, such as a user who speaks and can read only a particular language), operating systems, etc. This type of testing helps find out how well a system performs in a particular environment that includes hardware, network, operating system and other software.
* It is basically the testing of the application or the product built with the computing environment.
* It tests whether the application or the software product built is compatible with the hardware, operating system, database or other system software or not.

32) What is Test Plan? And what are the different documents required for preparing it?
The Test Plan document is derived from the Product Description, the Software Requirement Specification (SRS), or the Use Case documents.
The Test Plan document is usually prepared by the Test Lead or Test Manager and the focus of the document is to describe what to test, how to test, when to test and who will do what test.
It is not uncommon to have one Master Test Plan, a common document for all the test phases, with each test phase having its own Test Plan document.
There is much debate as to whether the Test Plan should be a static document, like the Test Strategy document described in the next question, or should be updated ever so often to reflect changes in the direction of the project and its activities.
My own personal view is that when a testing phase starts and the Test Manager is “controlling” the activities, the test plan should be updated to reflect any deviation from the original plan. After all, Planning and Control are continuous activities in the formal test process.
Components of the Test Plan document
* Test Plan id
* Introduction
* Test items
* Features to be tested
* Features not to be tested
* Test techniques
* Testing tasks
* Suspension criteria
* Features pass or fail criteria
* Test environment (Entry criteria, Exit criteria)
* Test deliverables
* Staff and training needs
* Responsibilities
* Schedule
This is a standard approach to preparing test plan and test strategy documents, but things can vary from company to company.

33) What is Test Strategy? And contents of the same.
A Test Strategy document is a high level document and is normally developed by the project manager. This document defines the “testing approach” used to achieve the testing objectives. The Test Strategy is normally derived from the Business Requirement Specification document.

The Test Strategy document is a static document meaning that it is not updated too often. It sets the standards for testing processes and activities and other documents such as the Test Plan draws its contents from those standards set in the Test Strategy Document.

Some companies include the “Test Approach” or “Strategy” inside the Test Plan, which is fine and is usually the case for small projects. However, for larger projects there is one Test Strategy document and a number of different Test Plans, one for each phase or level of testing.

Components of the Test Strategy document
•    Scope and Objectives
•    Business issues
•    Roles and responsibilities
•    Communication and status reporting
•    Test deliverables
•    Industry standards to follow
•    Test automation and tools
•    Testing measurements and metrics
•    Risks and mitigation
•    Defect reporting and tracking
•    Change and configuration management
•    Training plan

34) What is a Traceability Matrix Document?
A Traceability Matrix is a document used for tracking requirements, test cases and defects. This document is prepared to satisfy higher management and clients that the test coverage is complete and end to end. It consists of the Requirement/Baseline document reference number, the test case/condition, and the defect/bug id. Using this document a person can trace back from a defect id to the requirement.

Test conditions should be able to be linked back to their sources in the test basis, this is known as traceability. Traceability can be horizontal through all the test documentation for a given test level (e.g. system testing, from test conditions through test cases to test scripts) or it can be vertical through the layers of development documentation (e.g. from requirements to components).

Now, the question that may arise is: why is traceability important? Let's have a look at the following examples:

    The requirements for a given function or feature have changed. Some of the fields now have different ranges that can be entered. Which tests were looking at those boundaries? They now need to be changed. How many tests will actually be affected by this change in the requirements? These questions can be answered easily if the requirements can easily be traced to the tests.

    A set of tests that has run OK in the past has now started creating serious problems. What functionality do these tests actually exercise? Traceability between the tests and the requirement being tested enables the functions or features affected to be identified more easily.

Before delivering a new release, we want to know whether or not we have tested all of the specified requirements in the requirements specification. We have the list of the tests that have passed – was every requirement tested?
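In its simplest form a traceability matrix is just a mapping from requirement ids to the test cases that cover them; the ids below are hypothetical:

    # Requirement id -> ids of the test cases that cover it.
    traceability = {
        "REQ-001": ["TC-01", "TC-02"],
        "REQ-002": ["TC-03"],
        "REQ-003": [],                 # a coverage gap: untested requirement
    }

    # Which tests need revisiting if REQ-001 changes?
    print(traceability["REQ-001"])     # -> ['TC-01', 'TC-02']

    # Before a release: was every requirement tested?
    untested = [req for req, tests in traceability.items() if not tests]
    print(untested)                    # -> ['REQ-003']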

35) What is Test Metrics? What are the different test metrics available?
“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meagre and unsatisfactory kind: it may be the beginning of knowledge, but you have scarcely, in your thoughts, advanced to the stage of science.” (Lord Kelvin)
Why we need Metrics?
“We cannot improve what we cannot measure.”
“We cannot control what we cannot measure.”

Test metrics help to:
Take decisions for the next phase of activities
Provide evidence for a claim or prediction
Understand the type of improvement required
Take decisions on process or technology changes

Types of metrics
Base Metrics (Direct Measure)
Base metrics constitute the raw data gathered by a Test Analyst throughout the testing effort. These metrics are used to provide project status reports to the Test Lead and Project Manager; they also feed into the formulas used to derive Calculated Metrics.
Ex: # of Test Cases, # of Test Cases Executed

Calculated Metrics (Indirect Measure)
Calculated Metrics convert the Base Metrics data into more useful information. These types of metrics are generally the responsibility of the Test Lead and can be tracked at many different levels (by module, tester, or project).
Ex: % Complete, % Test Coverage

Base Metrics & Test Phases
• # of Test Cases (Test Development Phase)
• # of Test Cases Executed (Test Execution Phase)
• # of Test Cases Passed (Test Execution Phase)
• # of Test Cases Failed (Test Execution Phase)
• # of Test Cases under Investigation (Test Development Phase)
• # of Test Cases Blocked (Test dev/execution Phase)
• # of Test Cases Re-executed (Regression Phase)
• # of First Run Failures (Test Execution Phase)
• Total Executions (Test Reporting Phase)
• Total Passes (Test Reporting Phase)
• Total Failures (Test Reporting Phase)
• Test Case Execution Time ((Test Reporting Phase)
• Test Execution Time (Test Reporting Phase)

Calculated Metrics & Phases
The metrics below are created in the Test Reporting phase or the post-test analysis phase:
• % Complete
• % Defects Corrected
• % Test Coverage
• % Rework
• % Test Cases Passed
• % Test Effectiveness
• % Test Cases Blocked
• % Test Efficiency
• 1st Run Fail Rate
• Defect Discovery Rate
• Overall Fail Rate

Crucial Web Based Testing Metrics
Test Plan coverage on Functionality
Total number of requirements v/s number of requirements covered through test scripts.
• (Number of requirements covered / total number of requirements) * 100
Define the requirements at the time of effort estimation.
Example: the total number of requirements estimated is 46, the total number of requirements tested is 39 and 7 are blocked; what is the coverage?
Note: define requirements clearly at the project level.

Test Case defect density
Total number of errors found in test scripts v/s test scripts developed and executed.
• (Defective Test Scripts / Total Test Scripts) * 100
Example: total test scripts developed 1360, total test scripts executed 1280, total test scripts passed 1065, and total test scripts failed 215, so the test case defect density is (215 / 1280) * 100 = 16.8%.
This 16.8% value can also be called the test case efficiency %, which depends upon the total number of test cases which uncovered defects.

Defect Slippage Ratio
Number of defects slipped (reported from production) v/s number of defects reported during execution.
• Number of Defects Slipped / (Number of Defects Raised - Number of Defects Withdrawn)
Example: customer-filed defects are 21, total defects found while testing are 267, and the total number of invalid defects is 17. So, the slippage ratio is [21 / (267 - 17)] X 100 = 8.4%

Requirement Volatility
Number of requirements agreed v/s number of requirements changed.
• (Number of Requirements Added + Deleted + Modified) *100 / Number of Original Requirements
• Ensure that the requirements are normalized or defined properly while estimating
Example: the VSS 1.3 release initially had a total of 67 requirements; later another 7 new requirements were added, 3 of the initial requirements were removed and 11 were modified. So, requirement volatility is (7 + 3 + 11) * 100 / 67 = 31.34%
This means almost a third of the requirements changed after their initial identification.

Review Efficiency
The Review Efficiency is a metric that offers insight into the quality of reviews and testing.
Some organizations also use this term for “static testing” efficiency, aiming to find a minimum of 30% of defects in static testing.
Review efficiency = 100 * (total number of defects found by reviews / total number of project defects)
Example: a project found a total of 269 defects in different reviews, which were fixed, and the test team reported 476 defects which were valid.
So, review efficiency is [269 / (269 + 476)] X 100 = 36.1%

Efficiency and Effectiveness of Processes
• Effectiveness: Doing the right thing. It deals with meeting the desirable attributes that are expected by the customer.
• Efficiency: Doing the thing right. It concerns the resources used for the service to be rendered

Metrics for Software Testing
• Defect Removal Effectiveness
DRE = (defects removed during the development phase x 100%) / defects latent in the product, where
defects latent in the product = defects removed during the development phase + defects found later by the user
• Efficiency of Testing Process (define size in KLoC or FP, Req.)
Testing Efficiency= Size of Software Tested /Resources used
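The calculated metrics above reduce to simple ratios. The following sketch recomputes the worked examples from this section:

    def pct(part, whole):
        """Express part/whole as a percentage."""
        return part / whole * 100

    # Test case defect density: 215 failed out of 1280 executed scripts.
    print(f"{pct(215, 1280):.1f}%")        # -> 16.8%

    # Defect slippage ratio: 21 slipped, 267 raised, 17 withdrawn.
    print(f"{pct(21, 267 - 17):.1f}%")     # -> 8.4%

    # Requirement volatility: 7 added + 3 deleted + 11 modified, of 67.
    print(f"{pct(7 + 3 + 11, 67):.2f}%")   # -> 31.34%

    # Review efficiency: 269 review defects out of 269 + 476 total defects.
    print(f"{pct(269, 269 + 476):.1f}%")   # -> 36.1%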

36)  What are the different techniques of Test Estimation?
For a successful software project and proper execution of its tasks, estimation techniques play a vital role in the software development life cycle. A technique used to calculate the time required to accomplish a particular task is called an estimation technique. To estimate a task, different effective software estimation techniques can be used to get a better estimate.

Before moving forward let's ask some basic questions: What is the use of this? Why is this needed? Who will do this? So in this article I am discussing all your queries regarding estimation.
What is Estimation?

“Estimation is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.” [Wiki Definition]

An estimate is a prediction or a rough idea of how much effort it would take to complete a defined task, where the effort could be time or cost. It is a forecast, an approximate computation of the probable cost of a piece of work and of how long the task would take to complete.

The calculation of a test estimate is based on:
* Past Data/Past experience
* Available documents/Knowledge
* Assumptions
* Calculated risks

Before starting, one common question that arises in testers' minds is: “Why do we estimate?” The answer is pretty simple: we estimate the task to avoid exceeding timescales and overshooting budgets for the testing activities.

Few points need to be considered before estimating testing activities:
* Check whether all requirements are finalized or not.
* If not, how frequently are they going to change?
* Make sure all responsibilities and dependencies are clear.
* Check whether the required infrastructure is ready for testing or not.
* Check that all assumptions and risks are documented before estimating the task.

There are different Software Testing Estimation Techniques which can be used for estimating a task.
1) Delphi Technique
2) Work Breakdown Structure (WBS)
3) Three Point Estimation
4) Functional Point Method

1) Delphi Technique:
Delphi technique – this is one of the most widely used software testing estimation techniques.
The Delphi method is based on surveys and basically collects information from participants who are experts. In this estimation technique each task is assigned to each team member, and surveys are conducted over multiple rounds until a final estimate for the task is agreed. In each round, thoughts about the task are gathered and feedback is provided. By using this method, you can get quantitative and qualitative results.

Among the available techniques, this one gives good confidence in the estimation. It can be used in combination with the other techniques.

2) Work Breakdown Structure (WBS):      
A big project is made manageable by first breaking it down into individual components in a hierarchical structure, known as the Work breakdown structure, or the WBS.
The WBS helps the project manager and the team to create the task schedule and a detailed cost estimate for the project. By going through the WBS, the project manager and team will have a pretty good idea whether or not they have captured all the necessary tasks, based on the project requirements, that are going to need to happen to get the job done.
In this technique the complex project is divided into smaller pieces. The modules are divided into smaller sub-modules. Each sub-module is further divided into functionalities, and each functionality can be divided into sub-functionalities. After breaking down the work, all the functionalities should be reviewed to check whether each and every one of them is covered in the WBS.

Using this, you can easily figure out all the tasks that need to be completed; because they are broken down into detailed tasks, estimating the detailed tasks is easier than estimating the overall complex project in one shot.

Work Breakdown Structure has four key benefits:
* Work Breakdown Structure forces the team to create detailed steps: in the WBS, all steps required to build or deliver the service are divided into detailed tasks by the project manager, team and customer. It helps to bring out assumptions and ambiguities, narrow down the scope of the project, create a dialogue, and raise critical issues early on.
* Work Breakdown Structure helps to improve the schedule and budget: the WBS enables you to make an effective schedule and good budget plans. As all tasks are already available, it helps in generating a meaningful schedule and makes drawing up a reliable budget easier.
* Work Breakdown Structure creates accountability: the detailed task breakdown makes it easier to assign a particular module's tasks to individuals and to hold a person accountable for completing them. With the detailed tasks in the WBS, people cannot hide under the “cover of broadness.”
* Work Breakdown Structure creation breeds commitment: the process of developing and completing a WBS breeds excitement and commitment. Although the project manager will often develop the high-level WBS, he will seek the participation of his core team to flesh out the extreme detail of the WBS. This participation sparks involvement in the project.

3) Three Point Estimation:
Three point estimation is an estimation method based on statistical data. It is very similar to the WBS technique: tasks are broken down into subtasks, and three types of estimates are made for each sub-piece.
Optimistic Estimate (best case scenario in which nothing goes wrong and all conditions are optimal) = A
Most Likely Estimate (most likely duration; there may be some problems but most things will go right) = M
Pessimistic Estimate (worst case scenario in which everything goes wrong) = B

Formula to find the Estimate (E) = (A + 4*M + B) / 6
Standard Deviation (SD) = (B – A) / 6
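A minimal sketch of this calculation in Python (the hour values below are hypothetical):

```python
def three_point_estimate(a: float, m: float, b: float) -> tuple[float, float]:
    """PERT-style estimate: a = optimistic, m = most likely, b = pessimistic."""
    e = (a + 4 * m + b) / 6  # weighted average; the most likely case counts 4x
    sd = (b - a) / 6         # standard deviation of the estimate
    return e, sd

# Hypothetical sub-task estimated at 10 / 14 / 24 hours
e, sd = three_point_estimate(10, 14, 24)
print(f"Estimate: {e:.1f} h, SD: {sd:.1f} h")  # Estimate: 15.0 h, SD: 2.3 h
```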
Nowadays, planning poker and Delphi estimates are the most popular test estimation techniques.

4) Functional Point Method:
A function point is measured from a functional, or user, point of view.
It is independent of the computer language, capability, technology or development methodology of the team. It is based on available documents like the SRS, design documents, etc.
In the FP technique we assign a weighting to each function point. Before the actual estimation starts, function points are divided into three groups: Complex, Medium and Simple. Based on similar projects and organization standards, we define the estimate per function point.

Total Effort Estimate = Total Function Points * Estimate defined per Functional Point
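As an illustration only — the weights, counts and per-point rate below are hypothetical; in practice they come from similar projects and organization standards:

```python
# Hypothetical weights per group and effort rate per weighted function point
WEIGHTS = {"complex": 5, "medium": 3, "simple": 1}
HOURS_PER_FP = 2.0  # assumed rate from organization standards

counts = {"complex": 4, "medium": 10, "simple": 6}  # counted from the SRS

total_fp = sum(WEIGHTS[group] * n for group, n in counts.items())
total_effort = total_fp * HOURS_PER_FP
print(f"Total function points: {total_fp}")   # 4*5 + 10*3 + 6*1 = 56
print(f"Total effort: {total_effort} hours")  # 56 * 2.0 = 112.0
```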

Advantages of the Functional Point Method:
* Estimates can be prepared at the pre-project stage.
* Because it is based on requirement specification documents, the method's reliability is relatively high.

Disadvantages of Software Estimation Techniques:
* Estimates can be over- or under-stated due to hidden factors
* Not completely accurate
* Based on judgment and assumptions
* Involves risk
* May give misleading results
* Estimates cannot always be trusted

37) Explain the concept of Decision Tables
The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based software testing techniques, decision tables and state transition testing, are more focused on business logic or business rules.

A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a ’cause-effect’ table. The reason for this is that there is an associated logic diagramming technique called ’cause-effect graphing’ which was sometimes used to help derive the decision table (Myers describes this as a combinatorial logic network [Myers, 1979]). However, most people find it more useful just to use the table described in [Copeland, 2003].

* Decision tables provide a systematic way of stating complex business rules, which is useful for developers as well as for testers.
* Decision tables can be used in test design whether or not they are used in specifications, as they help testers explore the effects of combinations of different inputs and other software states that must correctly implement business rules.
* Helping the developers do a better job can also lead to better relationships with them. Testing combinations can be a challenge, as the number of combinations can often be huge. Testing all combinations may be impractical, if not impossible. We have to be satisfied with testing just a small subset of combinations, but choosing which combinations to test and which to leave out is important. If you do not have a systematic way of selecting combinations, an arbitrary subset will be used, and this may well result in an ineffective test effort.

How to use decision tables for test design?
The first task is to identify a suitable function or subsystem that reacts according to a combination of inputs or events. The system should not involve too many inputs; otherwise the number of combinations will become unmanageable. It is better to deal with large numbers of conditions by dividing them into subsets and dealing with the subsets one at a time. Once you have identified the aspects that need to be combined, you put them into a table listing all the combinations of True and False for each of the aspects.

Credit card example:
Let’s take another example. Suppose you want to open a credit card account. There are three conditions: first, if you are a new customer, you get a 15% discount on all your purchases today; second, if you are an existing customer and hold a loyalty card, you get a 10% discount; and third, if you have a coupon, you can get 20% off today (but it can’t be used with the ‘new customer’ discount). Discount amounts are added, if applicable.
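A small sketch of these rules in Python; enumerating every True/False combination derives the columns of the decision table. Note that treating the coupon as replacing the new-customer discount is just one reading of "can't be used with", and in practice a new customer presumably cannot already hold a loyalty card, so some columns may be infeasible:

```python
from itertools import product

def discount_percent(new_customer: bool, loyalty_card: bool, coupon: bool) -> int:
    """Total % discount today under the stated rules (one possible reading)."""
    total = 0
    if coupon:
        total += 20        # the coupon replaces the new-customer discount
    elif new_customer:
        total += 15        # new-customer discount applies only without a coupon
    if loyalty_card:
        total += 10        # loyalty discount adds on top, if held
    return total

# One column of the decision table per True/False combination of the conditions
for new, loyal, coup in product([True, False], repeat=3):
    print(f"new={new!s:<5} loyalty={loyal!s:<5} coupon={coup!s:<5} "
          f"-> {discount_percent(new, loyal, coup)}%")
```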


The conditions and actions are listed in the left-hand column. Each of the other columns in the decision table represents a separate rule, one for each combination of conditions. We may choose to test each rule/combination, and if there are only a few this will usually be the case. However, if the number of rules/combinations is large, we are more likely to sample them by selecting a rich subset for testing.

If we are applying this technique thoroughly, we would have one test for each column or rule of our decision table. The advantage of doing this is that we may test a combination of things that we might not otherwise have tested, and that could find a defect. However, if we have a lot of combinations, it may not be possible or sensible to test every one, especially if we are time-constrained. Don’t just assume that all combinations need to be tested; it is always better to prioritize and test the most important combinations. Having the full table helps us decide which combinations to test and which not to test this time. In the example above all the conditions are binary, i.e. they have only two possible values: True or False (or we can say Yes or No).

38) What are Test Entry and Exit criteria?
The entry criteria are the conditions that must be met before testing of a system begins, such as:
•    SRS – Software Requirement specification document from business analyst/Client.
•    FRS/Design Document – Functional Requirement Specification document from business analyst.
•    Use case document from clients
•    Test Strategy/ Test Plan
•    Test Design & Review

The exit criteria ensure that testing is complete and the application is ready for release, for example:
•    Test Summary Report.
•    Metrics.
•    Defect Analysis Report.

39) What are SDLC and STLC? Explain its different phases
SDLC
• Requirement phase
• Designing phase (High-Level Design (HLD), Detailed-Level Design (DLD)/program spec)
• Coding
• Testing
• Release
• Maintenance

STLC
• System Study
• Test planning
• Writing Test case or scripts
• Review the test case
• Executing test case
• Bug tracking
• Report the defect

40) What is Use Case?
A use case is a scenario that describes the use of a system by an actor to accomplish a specific goal. An actor is a user playing a role with respect to the system. Actors are generally people although other computer systems may be actors. A scenario is a sequence of steps that describe the interactions between an actor and the system. The use case model consists of the collection of all actors and all use cases. Use cases help us capture the
• System’s functional requirements from the users' perspective
• Actively involve users in the requirements-gathering process
• Provide the basis for identifying major classes and their relationships
• Serve as the foundation for developing system test cases

41) What do you mean by coverage and what are the different types of coverage techniques?
Test coverage measures the amount of testing performed by a set of tests. Wherever we can count things and can tell whether or not each of those things has been tested by some test, we can measure coverage; this is known as test coverage.
The basic coverage measure is: coverage = (number of coverage items exercised / total number of coverage items) x 100%, where the 'coverage item' is whatever we have been able to count and see whether a test has exercised or used.

There is a danger in using a coverage measure: 100% coverage does not mean 100% tested. Coverage techniques measure only one dimension of a multi-dimensional concept. Two different test cases may achieve exactly the same coverage, but the input data of one may find an error that the input data of the other doesn't.

Benefit of code coverage measurement:
* It helps in creating additional test cases to increase coverage
* It helps in finding areas of a program not exercised by a set of test cases
* It helps in determining a quantitative measure of code coverage, which indirectly measures the quality of the application or product

Drawback of code coverage measurement:
* One drawback of code coverage measurement is that it measures coverage of what has been written, i.e. the code itself; it cannot say anything about software that has not been written.
* If a specified function has not been implemented, or a function was omitted from the specification, then structure-based techniques cannot say anything about it; they only look at the structure that is already there.


There are many types of test coverage. Test coverage can be used in any level of the testing. Test coverage can be measured based on a number of different structural elements in a system or component. Coverage can be measured at component testing level, integration-testing level or at system- or acceptance-testing levels. For example, at system or acceptance level, the coverage items may be requirements, menu options, screens, or typical business transactions. At integration level, we could measure coverage of interfaces or specific interactions that have been tested.

We can also measure coverage for each of the specification-based techniques:
* EP: percentage of equivalence partitions exercised (we could measure valid and invalid partition coverage separately if this makes sense);
* BVA: percentage of boundaries exercised (we could also separate valid and invalid boundaries if we wished);
* Decision tables: percentage of business rules or decision table columns tested;
* State transition testing: there are a number of possible coverage measures:
o Percentage of states visited
o Percentage of (valid) transitions exercised (this is known as Chow’s 0-switch coverage)
o Percentage of pairs of valid transitions exercised (‘transition pairs’ or Chow’s 1-switch coverage) – and longer series of transitions, such as transition triples, quadruples, etc.
o Percentage of invalid transitions exercised (from the state table).

The coverage measures for specification-based techniques would apply at whichever test level the technique has been used (e.g. system or component level).

The different types of coverage are:
1)     Statement coverage
2)     Decision coverage
3)     Condition coverage

Statement coverage:
* The statement coverage is also known as line coverage or segment coverage.
* Statement coverage covers only the true conditions.
* Through statement coverage we can identify which statements have been executed and where code has not been executed because of a blockage.
* In this process each and every line of code needs to be checked and executed.

Advantage of statement coverage:
* It verifies what the written code is expected to do and not to do
* It measures the quality of the code written
* It checks the flow of different paths in the program and also ensures that those paths are tested

Disadvantage of statement coverage:
* It cannot test false conditions.
* It does not report whether a loop reaches its termination condition.
* It does not understand logical operators.

The statement coverage can be calculated as shown below:
Statement coverage = (number of statements exercised / total number of statements) x 100%

Decision coverage:
* Decision coverage is also known as branch coverage or all-edges coverage.
* Unlike statement coverage, it covers both the true and false conditions.
* A branch is the outcome of a decision, so branch coverage simply measures which decision outcomes have been tested. This sounds great because it takes a more in-depth view of the source code than simple statement coverage.
* A decision is an IF statement, a loop control statement (e.g. DO-WHILE or REPEAT-UNTIL), or a CASE statement, where there are two or more outcomes from the statement. With an IF statement, the exit can either be TRUE or FALSE, depending on the value of the logical condition that comes after IF.

Advantages of decision coverage:
* It validates that all the branches in the code are reached
* It ensures that no branch leads to any abnormality in the program’s operation
* It eliminates problems that occur with statement coverage testing

Disadvantages of decision coverage:
* This metric ignores branches within boolean expressions which occur due to short-circuit operators.

The decision coverage can be calculated as given below:
Decision coverage = (number of decision outcomes exercised / total number of decision outcomes) x 100%
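A tiny illustration with a hypothetical function: a single test with is_member=True executes every statement (100% statement coverage) but exercises only the True outcome of the decision, so decision coverage stays at 50% until the False outcome is also tested.

```python
def apply_discount(price: int, is_member: bool) -> int:
    if is_member:                  # a decision with two outcomes: True and False
        price = price * 90 // 100  # executed only on the True branch
    return price

# This one test executes every statement (100% statement coverage),
# yet only the True outcome of the decision is exercised (50% decision coverage).
assert apply_discount(100, True) == 90
# A second test is needed to exercise the False outcome as well.
assert apply_discount(100, False) == 100
```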

Condition coverage:
* This is closely related to decision coverage but has better sensitivity to the control flow.
* However, full condition coverage does not guarantee full decision coverage (see the sketch after this list).
* Condition coverage reports the true or false outcome of each condition.
* Condition coverage measures the conditions independently of each other.
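A brief sketch of the distinction, using a hypothetical function with two atomic conditions: the two tests below give each condition both a True and a False value (full condition coverage), yet the decision outcome is False in both cases, so decision coverage remains incomplete.

```python
def can_checkout(in_stock: bool, logged_in: bool) -> bool:
    return in_stock and logged_in  # one decision built from two conditions

# Condition coverage: each condition takes both True and False across the tests
assert can_checkout(True, False) is False  # in_stock=T, logged_in=F
assert can_checkout(False, True) is False  # in_stock=F, logged_in=T
# The decision outcome was False both times: its True outcome is never exercised,
# so decision coverage is incomplete despite 100% condition coverage.
```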

Other control-flow code-coverage measures include linear code sequence and jump (LCSAJ) coverage, multiple condition coverage (also known as condition combination coverage) and condition determination coverage (also known as multiple condition decision coverage or modified condition decision coverage, MCDC). This technique requires the coverage of all conditions that can affect or determine the decision outcome.

42)  What is Test Gap Analysis?
In terms of testing, gap analysis identifies whether all the baselined requirements defined by the business analyst, and the implicit requirements derived from them, are covered in the design. If anything is found missing in the design document, a defect or concern can be raised against the design team. These could be called pre-QA defects. Gap analysis is a way to identify static defects.

Benefits:
* Static defects cost less (or even nothing) to fix compared with dynamic defects
* Minimizes the design change for the defects raised during the test execution
* Possibility of missing a requirement would be minimal

43)  What is the difference between Use Case and Test Case?
Use Case is a series of steps an actor performs on a system to achieve a goal.
A use case is written in the Business Requirement Specification document (BRS) by the business analyst/client interface. It is used for understanding the functionality to be designed.

A test case describes actions, events, input requirements and the actual result. Test cases have a test case name, a test case identifier and an expected result. A test case is written based on the Software Requirement Specification document (SRS) by the test engineer. The person who wrote a test case may execute it, or another person may.

44) How you will do Risk Analysis during software testing?
There are many techniques for analyzing testing risks. They are:
* One technique for risk analysis is a close reading of the requirements specification, design specifications, user documentation and other items.
* Another technique is brainstorming with many of the project stakeholders.
* Another is a sequence of one-on-one or small-group sessions with the business and technology experts in the company.
* Some people use all these techniques when they can. To us, a team-based approach that involves the key stakeholders and experts is preferable to a purely document-based approach, as team approaches draw on the knowledge, wisdom and insight of the entire team to determine what to test and how much.

The scales used to rate possibility and impact vary. Some people rate them high, medium and low. Some use a 1-10 scale. The problem with a 1-10 scale is that it’s often difficult to tell a 2 from a 3 or a 7 from an 8, unless the differences between each rating are clearly defined. A five-point scale (very high, high, medium, low and very low) tends to work well.
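As one hedged illustration of combining a five-point likelihood scale with a five-point impact scale into a single score for prioritization (the risks and ratings below are hypothetical):

```python
# Hypothetical 5-point scales for likelihood and impact, combined into a risk score
SCALE = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_score(likelihood: str, impact: str) -> int:
    """Simple likelihood x impact product; higher means test it sooner."""
    return SCALE[likelihood] * SCALE[impact]

risks = [
    ("Test environment unavailable", "medium", "very high"),
    ("Misspelled label on a rarely used screen", "low", "very low"),
]
for name, lik, imp in sorted(risks, key=lambda r: -risk_score(r[1], r[2])):
    print(f"{risk_score(lik, imp):2}  {name}")
```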

Let us also discuss some risks which occur usually along with some options for managing them:
* Logistics or product quality problems that block tests: These can be mitigated by careful planning, good defect triage and management, and robust test design.
* Test items that won’t install in the test environment: These can be mitigated through smoke (or acceptance) testing prior to starting test phases or as part of a nightly build or continuous integration. Having a defined uninstall process is a good contingency plan.
* Excessive change to the product that invalidates test results or requires updates to test cases, expected results and environments: These can be mitigated through good change-control processes, robust test design and lightweight test documentation. When severe incidents occur, transference of the risk by escalation to management is often in order.
* Insufficient or unrealistic test environments that yield misleading results: One option is to transfer the risks to management by explaining the limits on test results obtained in limited environments. Mitigation – sometimes complete alleviation – can be achieved by outsourcing tests such as performance tests that are particularly sensitive to proper test environments.

Let us also go through some additional risks and perhaps ways to manage them:
* Organizational issues such as shortages of people, skills or training, problems with communicating and responding to test results, bad expectations of what testing can achieve and complexity of the project team or organization.
* Supplier issues such as problems with underlying platforms or hardware, failure to consider testing issues in the contract or failure to properly respond to the issues when they arise.
* Technical issues related to ambiguous, conflicting or unprioritized requirements, an excessively large number of requirements given other project constraints, high system complexity and quality problems with the design, the code or the tests.
It is really very important to keep in mind that not all projects are subject to the same risks.
Finally, we should not forget that even test items can also have risks associated with them.
For example, there is a risk that the test plan will omit tests for a functional area or that the test cases do not exercise the critical areas of the system.

By using a test plan template like the IEEE 829 template shown earlier, you can remind yourself to consider and manage risks during the planning phase. It is worth repeating this exercise as the project progresses, because early risk assessments are educated guesses and some of those guesses might be wrong. Make sure that you plan to re-assess and adjust your risks at regular intervals in the project and make appropriate course corrections to the testing or the project itself.

You should manage risks appropriately, based on likelihood and impact, and decide how much of your overall effort can be spent dealing with each of them.

45) What are Software Testing Levels?
Testing levels are basically there to identify missing areas and prevent overlap and repetition between the development life cycle phases. In software development life cycle models there are defined phases like requirement gathering and analysis, design, coding or implementation, testing and deployment. Each phase goes through testing; hence there are various levels of testing. The various levels of testing are:

* Unit testing
* Component testing
* Integration testing
- Big bang integration testing
- Top down
- Bottom up
- Functional incremental
* Component integration testing
* System integration testing
* System testing
* Acceptance testing
* Alpha testing
* Beta testing

46) What is the difference between Severity and Priority?
Severity:
It is the extent to which the defect can affect the software; in other words, it defines the impact that a given defect has on the system. For example, if an application or web page crashes when a remote link is clicked: clicking the remote link is rare for a user, but the impact of the application crashing is severe. So the severity is high but the priority is low.

Severity can be of following types:
* Critical: The defect results in the termination of the complete system, or of one or more components of the system, and causes extensive corruption of the data. The failed function is unusable and there is no acceptable alternative method to achieve the required results.
* Major: The defect results in the termination of the complete system, or of one or more components of the system, and causes extensive corruption of the data. The failed function is unusable, but there exists an acceptable alternative method to achieve the required results.
* Moderate: The defect does not result in termination, but causes the system to produce incorrect, incomplete or inconsistent results.
* Minor: The defect does not result in termination and does not damage the usability of the system, and the desired results can be easily obtained by working around the defect.
* Cosmetic: The defect is related to an enhancement of the system, where the changes concern the look and feel of the application.

Priority:
Priority defines the order in which we should resolve a defect: should we fix it now, or can it wait? The priority status is set by the tester for the developer, mentioning the time frame within which to fix the defect. If high priority is mentioned, the developer has to fix it at the earliest. The priority status is set based on the customer requirements. For example, if the company name is misspelled on the home page of the website, the priority is high but the severity is low.

Priority can be of following types:
* Low: The defect is an irritant that should be repaired, but the repair can be deferred until after more serious defects have been fixed.
* Medium: The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
* High: The defect must be resolved as soon as possible because it affects the application or product severely. The system cannot be used until the repair has been done.

Few very important scenarios related to the severity and priority:
* High Priority & High Severity: An error that occurs in the basic functionality of the application and will not allow the user to use the system. (E.g. in a site maintaining student details, if saving a record fails, this is a high priority and high severity bug.)
* High Priority & Low Severity: Spelling mistakes on the cover page, heading or title of an application.
* High Severity & Low Priority: An error in functionality (for which there is no workaround) that will not allow the user to use the system, but which occurs on a link rarely used by the end user.
* Low Priority & Low Severity: Any cosmetic or spelling issue within a paragraph or in a report (not on the cover page, heading or title).

47) What is the difference between Bug, Defect and Error?
A defect is an error, or a bug, in the application that has been created. A programmer, while designing and building the software, can make mistakes or errors. These mistakes mean that there are flaws in the software; these are called defects.
Hence, any deviation from the specification mentioned in the product functional specification document is a defect.

Bug: It is found in the development environment before the product is shipped to the respective customer.
Error: It is the deviation between the actual and the expected value.
Defect: It is found in the product itself after it is shipped to the respective customer.

48) What are the different stages of Bug Life Cycle?
The defect life cycle is the cycle a defect goes through during its lifetime. It starts when a defect is found and ends when the defect is closed, after ensuring it is not reproducible. The defect life cycle is related to the bugs found during testing.

The bug has different states in the Life Cycle. The Life cycle of the bug can be shown diagrammatically as follows:

The bug or defect life cycle includes the following steps or statuses (a sketch of the allowed transitions in code follows the list):
* New: When a defect is logged and posted for the first time, its state is 'New'.
* Assigned: After the tester has posted the bug, the tester's lead approves that the bug is genuine and assigns it to the corresponding developer and developer team. Its state is then 'Assigned'.
* Open: At this state the developer has started analyzing and working on the defect fix.
* Fixed: When the developer makes the necessary code changes and verifies them, he/she can set the bug status to 'Fixed', and the bug is passed to the testing team.
* Pending retest: After fixing the defect, the developer gives that particular code to the tester for retesting. The testing is pending on the tester's end, hence the status 'Pending retest'.
* Retest: At this stage the tester retests the changed code the developer has handed over, to check whether the defect is fixed or not.
* Verified:  The tester tests the bug again after it got fixed by the developer. If the bug is not present in the software, he approves that the bug is fixed and changes the status to “verified”.
* Reopen:  If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “reopened”. The bug goes through the life cycle once again.
* Closed:  Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “closed”. This state means that the bug is fixed, tested and approved.
* Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to 'Duplicate'.
* Rejected: If the developer feels that the bug is not genuine, he rejects the bug. The state of the bug is then changed to 'Rejected'.
* Deferred: Changing a bug to the 'Deferred' state means it is expected to be fixed in a future release. Many factors can lie behind this: the bug's priority may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.
* Not a bug: The state is set to 'Not a bug' when there is no change in the functionality of the application. For example, if the customer asks for a change in the look and feel of the application, such as changing the colour of some text, it is not a bug but just a change in the application's appearance.
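As a rough sketch, the life cycle can be encoded as a table of allowed status transitions; the states below follow the list above, but real defect trackers vary in their exact workflow:

```python
# One possible encoding of the bug life cycle as allowed state transitions
TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Duplicate", "Deferred", "Not a bug"},
    "Assigned": {"Open"},
    "Open": {"Fixed", "Rejected", "Deferred", "Not a bug"},
    "Fixed": {"Pending retest"},
    "Pending retest": {"Retest"},
    "Retest": {"Verified", "Reopen"},
    "Verified": {"Closed"},
    "Reopen": {"Assigned"},  # a reopened bug goes through the cycle again
}

def move(state: str, new_state: str) -> str:
    """Validate a status change against the life cycle above."""
    if new_state not in TRANSITIONS.get(state, set()):
        raise ValueError(f"Illegal transition: {state} -> {new_state}")
    return new_state

state = "New"
for nxt in ["Assigned", "Open", "Fixed", "Pending retest", "Retest", "Verified", "Closed"]:
    state = move(state, nxt)
print(state)  # Closed
```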

49)  What is Defect Leakage?
Defect leakage occurs at the customer or end-user side after the application has been delivered. If, after the release of the application to the client, the end user finds defects while using the application, this is called defect leakage. Defect leakage is also called bug leakage.

However, not all such defects can be termed defect leakage, as some defects may be related to environment, deployment, etc. Once post-production defects are sent to the QA team, an analysis is done to check whether the issue was really missed by QA or is an environment or deployment issue. Root cause analysis is carried out on the defects missed by QA, and then a defect prevention plan is prepared.
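One common formulation of the metric (variants exist, so treat this as an assumption rather than a standard): defect leakage is the share of all defects that escaped to production.

```python
def defect_leakage(found_in_qa: int, found_in_production: int) -> float:
    """One common formulation: % of all defects that escaped to production."""
    return 100.0 * found_in_production / (found_in_qa + found_in_production)

print(f"{defect_leakage(found_in_qa=95, found_in_production=5):.1f}%")  # 5.0%
```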

50) What is the cost of Defects in software testing?
If the error is made and the consequent defect is detected in the requirements phase then it is relatively cheap to fix it.
Similarly if an error is made and the consequent defect is found in the design phase then the design can be corrected and reissued with relatively little expense.

The same applies for the construction phase. If, however, a defect is introduced in the requirement specification and it is not detected until acceptance testing, or even once the system has been implemented, then it will be much more expensive to fix. This is because rework will be needed in the specification and design before changes can be made in construction; because one defect in the requirements may well propagate into several places in the design and code; and because all the testing work done to that point will need to be repeated in order to reach the confidence level in the software that we require.

It is quite often the case that defects detected at a very late stage, depending on how serious they are, are not corrected because the cost of doing so is too expensive.

51) From where do the defects and failures in Software testing arise?
Defects and failures basically arise from:
* Errors in the specification, design and implementation of the software and system
* Errors in use of the system
* Environmental conditions
* Intentional damage
* Potential consequences of earlier errors

52)  If you have a shortage of time, how would you prioritize your testing?
Prioritization is based on the following parameters:
•    New functionalities/ features in the product which will be used by clients.
•    Areas, and related areas, where the maximum number of defects were found and fixed.
•    Areas where testing is not covered completely

53) What if there isn't enough time for thorough testing?
Since it's rarely possible to test every possible aspect of an application, every possible combination of events, every dependency, or everything that could go wrong, risk analysis is appropriate to most software development projects. Use risk analysis to determine where testing should be focused. This requires judgment skills, common sense and experience.

The checklist should include answers to the following questions:

• Which functionality is most important to the project's intended purpose?
• Which functionality is most visible to the user?
• Which functionality has the largest safety impact?
• Which functionality has the largest financial impact on users?
• Which aspects of the application are most important to the customer?
• Which aspects of the application can be tested early in the development cycle?
• Which parts of the code are most complex and thus most subject to errors?
• Which parts of the application were developed in rush or panic mode?
• Which aspects of similar/related previous projects caused problems?
• Which aspects of similar/related previous projects had large maintenance expenses?
• Which parts of the requirements and design are unclear or poorly thought out?
• What do the developers think are the highest-risk aspects of the application?
• What kinds of problems would cause the worst publicity?
• What kinds of problems would cause the most customer service complaints?
• What kinds of tests could easily cover multiple functionalities? Which tests will have the best high-risk-coverage to time-required ratio?

54) How do you decide when you have 'tested enough’? OR How can it be known when to stop testing?
Common factors in deciding when to stop are,
• Deadlines (release deadlines, testing deadlines, etc.)
• Test cases completed with certain percentage passed
• Test budget depleted
• Coverage of code/functionality/requirements reaches a specified point
• Bug rate falls below a certain level
• Beta or alpha testing period ends


55) What is Automated Testing good for?
Automated testing is particularly good at:
• Load and performance testing – automated tests are a prerequisite for conducting load and performance testing. It is not feasible to have 300 users manually test a system simultaneously; it must be automated.
• Smoke testing – a quick and dirty test to confirm that the system ‘basically’ works. A system which fails a smoke test is automatically sent back to the previous stage before further work is conducted, saving time and effort.
• Regression testing – testing functionality that should not have changed in a current release of code. Existing automated tests can be run and they will highlight changes in the functionality they have been designed to test (in incremental development, builds can be quickly tested and reworked if they have altered functionality delivered in previous increments).
• Setting up test data or pre-test conditions – an automated test can be used to set up test data or test conditions which would otherwise be time consuming to prepare.
• Repetitive testing which includes manual tasks that are tedious and prone to human error (e.g. checking account balances to 7 decimal places)
