Showing posts with label testing. Show all posts

Wednesday, April 14, 2010

Test-driven development cycle (TDD)

Test-driven development (TDD) is a software development technique that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test and finally improves the new code to acceptable standards.



1. Add a test

In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented. (If it does not fail, then the proposed “new” feature is obviated.) To write a test, the developer must clearly understand the feature's specification and requirements. The developer can accomplish this through use cases that cover the requirements and exception conditions. This could also imply a variant, or modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.


2. Run all tests and see if the new one fails

This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code. This step also tests the test itself, in the negative: it rules out the possibility that the new test will always pass, and therefore be worthless.
The new test should also fail for the expected reason. This increases confidence (although it does not entirely guarantee) that it is testing the right thing, and will pass only in intended cases.


3. Write some code

The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve and hone it.
It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality should be predicted and 'allowed for' at any stage.


4. Run the automated tests and see them succeed

If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a good point from which to begin the final step of the cycle.


5. Refactor code

Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to removing any duplication between the test code and the production code, for example strings that were repeated in both in order to make the test pass in step 3.

(Source: Wikipedia)
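
As a minimal sketch of one pass through this cycle, condensed into a single file (the function and its behavior are illustrative, not from the original article):

# tdd_discount.py -- one pass through the TDD cycle in one file.
import unittest

# Step 3: just enough code to make the test pass. In step 1 this
# function did not exist yet, so the tests below failed with a
# NameError -- the expected failure checked in step 2.
def apply_discount(price, percent):
    return price - price * percent / 100.0

# Step 1: the test, written before the implementation above.
class TestApplyDiscount(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_discount_returns_original_price(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

# Step 4: run the tests and see them pass; step 5 would refactor
# while re-running them after every change.
if __name__ == "__main__":
    unittest.main()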

Bottom Up Integration Testing


In bottom-up integration testing, the modules at the lowest level are developed first, and the modules that lie closer to the 'main' program are integrated and tested one at a time. Bottom-up integration uses test drivers to drive and pass appropriate data to the lower-level modules. As the code for the higher-level modules becomes ready, these drivers are replaced with the actual modules. In this approach, the lower-level modules are tested extensively, ensuring that the most heavily used modules are tested properly.
Advantages
1. The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repeatedly.
2. Appropriate for applications where a bottom-up design methodology is used.

Disadvantages
1. Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
2. This approach is not suitable for software developed with a top-down approach.
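
A minimal sketch of a test driver for a lowest-level module (the module and function names are illustrative):

# tax.py is the lowest-level module, developed and tested first.
def calculate_tax(amount, rate):
    return round(amount * rate, 2)

# Driver: stands in for the not-yet-written invoicing module,
# feeding hard-coded data to the lower-level module under test.
def test_driver():
    assert calculate_tax(100.00, 0.08) == 8.00
    assert calculate_tax(0.00, 0.08) == 0.00
    print("calculate_tax: all driver checks passed")

if __name__ == "__main__":
    test_driver()

# Once the real invoicing module is ready, it replaces this driver
# as the caller of calculate_tax().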

    Friday, April 9, 2010

    TESTING TECHNIQUES

    Testing techniques are classified into two categories:

    1. Black Box Testing.

    2. White Box Testing.



    Black Box Testing is done through the following techniques:
    1. Error Guessing
    2. Equivalence Partitioning
    3. Boundary Value Analysis
    4. Cause-effect Graphing
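
As a quick illustration of boundary value analysis (technique 3 above), here is a minimal sketch; the function and its valid range of 1 to 100 are illustrative assumptions:

# Boundary value analysis picks values at and around each boundary:
# just below, on, and just above the minimum and maximum.
def accepts_quantity(qty):
    return 1 <= qty <= 100

boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}

for qty, expected in boundary_cases.items():
    assert accepts_quantity(qty) == expected, f"failed at qty={qty}"
print("all boundary cases passed")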

    White Box Testing is done through the following techniques:
    1. Statement coverage
    2. Branch/loop coverage
    3. Flow coverage
    4. Condition coverage
    5. Data flow coverage
    6. Decision coverage
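
To see why branch coverage (technique 2 above) is stronger than statement coverage (technique 1), consider this illustrative sketch:

# The grade() function is illustrative.
def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# grade(75) alone executes every statement (100% statement coverage)
# but never exercises the False branch of the if. Branch coverage
# additionally requires a case where the condition is false:
assert grade(75) == "pass"   # True branch
assert grade(40) == "fail"   # False branch -- needed for branch coverage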

    RETEST VS REGRESSION TESTING

    RETEST:

    Retesting is the process of checking whether the bugs reported earlier have been fixed by the development team.
    Purpose: to identify whether the given bugs / issues / defects have been fixed.




    REGRESSION TESTING:


    Regression Testing: testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure.

    Purpose: to identify whether fixing the issues / bugs / defects has introduced new issues into the system.

    SMOKE TESTING & SANITY TESTING

    Smoke testing is done by testers before accepting a build for further testing. It is believed that, after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software.


    A smoke test is a collection of written tests that are performed on a system before it is accepted for further testing. This is also known as a Build Verification Test.


    This is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like, "Can I launch the test item at all?", "Does it open to a window?", "Do the buttons on the window do things?"
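
A minimal sketch of such a "shallow and wide" check (the URLs below are hypothetical; a real build verification suite would target the application actually under test):

# Smoke test: can we reach the main areas of the app at all?
import urllib.request

SMOKE_URLS = [
    "http://localhost:8080/",        # does the app launch at all?
    "http://localhost:8080/login",   # does the login window open?
]

def smoke_test():
    for url in SMOKE_URLS:
        with urllib.request.urlopen(url, timeout=5) as resp:
            assert resp.status == 200, f"smoke test failed for {url}"
    print("build accepted for further testing")

if __name__ == "__main__":
    smoke_test()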


    Software Sanity Tests are commonly conflated with smoke tests. A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests and unit tests on automated builds as part of their development process.


    A sanity test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.

    REGRESSION TESTING

    Regression testing is any type of software testing that seeks to uncover software errors by partially retesting a modified program. The intent of regression testing is to provide a general assurance that no additional errors were introduced in the process of fixing other problems. Regression testing is commonly used to efficiently test the system by systematically selecting the appropriate minimum suite of tests needed to adequately cover the affected change. Common methods of regression testing include rerunning previously run tests and checking whether previously fixed faults have re-emerged. "One of the main reasons for regression testing is that it's often extremely difficult for a programmer to figure out how a change in one part of the software will echo in other parts of the software." (Source: Wikipedia)


    Any time you modify an implementation within a program, you should also do regression testing. You can do so by rerunning existing tests against the modified code to determine whether the changes break anything that worked prior to the change and by writing new tests where necessary. Adequate coverage without wasting time should be a primary consideration when conducting regression tests. Try to spend as little time as possible doing regression testing without reducing the probability that you will detect new failures in old, already tested code. (Source: MSDN)
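
A minimal sketch of such a regression test, pinned to a previously fixed fault (the bug and the rounding behavior here are illustrative); keeping it in the suite catches the fault if it ever re-emerges:

import unittest

def invoice_total(amounts):
    # Bug fix: an earlier version summed floats directly and
    # accumulated rounding errors on long invoices.
    return round(sum(amounts), 2)

class RegressionTests(unittest.TestCase):
    def test_rounding_fault_does_not_reemerge(self):
        # Reproduction data from the original defect report.
        self.assertEqual(invoice_total([0.1] * 10), 1.0)

if __name__ == "__main__":
    unittest.main()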


    Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process and, in larger companies, is done by code testing specialists. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. These test cases form what becomes the test bucket. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is that changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.

    Tuesday, April 6, 2010

    STLC


    STLC: Software Test Life Cycle
    1. Test Strategy
    2. Test Plan
    3. Test Cases
    4. Test Case Execution
    5. Bug Analysis
    6. Reporting the Bug
    7. Regression Testing

    Tuesday, March 9, 2010

    Error Vs Bug Vs Defect

    Error: the deviation between the actual and the expected/theoretical value.

    Bug: an error found in the development environment, before the product is shipped to the customer.

    Defect: an error found in the product itself, after it has been shipped to the customer.



    Sunday, February 28, 2010

    Traceability Matrix


    A method used to validate the compliance of a process or product with the requirements for that process or product.  The requirements are each listed in a row of the matrix and the columns of the matrix are used to identify how and where each requirement has been addressed.

    A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your need. Information requirements determine the associated information that you store with the requirements. Requirements management tools capture associated information or provide the capability to add it.

    In a software development process, a traceability matrix is a table that correlates any two baselined documents that require a many-to-many relationship, to determine the completeness of the relationship. It is often used to map the high-level requirements and detailed requirements of the software product to the matching parts of the high-level design, detailed design, test plan, and test cases.

    Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is then added up for each row and each column. This value indicates the mapping of the two items. Zero values indicate that no relationship exists, and it must be determined whether one should be made. Large values imply that the item is too complex and should be simplified.
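
As a sketch of those mechanics (the requirement and test-case identifiers below are illustrative), a few lines of Python can compute the row and column totals and flag the zero values:

# Each mark in the matrix is one relationship between a requirement
# (row) and a test case (column).
relationships = {
    "REQ-1": ["TC-1", "TC-2"],
    "REQ-2": ["TC-2"],
    "REQ-3": [],               # zero relationships: flag for review
}
all_tests = ["TC-1", "TC-2", "TC-3"]

# Row totals: how many test cases cover each requirement.
for req, tests in relationships.items():
    flag = "  <-- uncovered!" if not tests else ""
    print(f"{req}: {len(tests)} test case(s){flag}")

# Column totals: how many requirements each test case touches.
for tc in all_tests:
    count = sum(tc in tests for tests in relationships.values())
    print(f"{tc}: traces to {count} requirement(s)")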

    To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents, for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.

    Useful Traceability Matrices:

    • Functional specification to requirements document: It shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.
    • Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.
    • Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.
    • Design specification to functional specification: verifies that each function has been covered in the design.
    • System test plan to functional specification: ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.



    Wednesday, February 10, 2010

    TEST STRATEGY VS TEST PLAN

    Test Strategy:
    It is a company-level document developed by QA-category people such as the QA lead and the PM. This document defines the "Testing Approach" used to achieve the testing objectives. The test strategy is based on the frozen part of the BRS, from which we get the test policy and the test strategy.

    Components in the Test Strategy are as follows:
    1. Scope and objective
    2. Business issues
    3. Roles and responsibilities
    4. Communication and status reporting
    5. Test deliverables
    6. Test approach
    7. Test automation and tools
    8. Testing measurements and metrics
    9. Risks and mitigation
    10. Defect reporting and tracking
    11. Change and configuration management
    12. Training plan

    Test Plan:
    The test plan is a frozen document developed from the SRS, FS, and use cases. After the testing team has been formed and risk analysis completed, the test lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test.
    There is one master test plan, which consists of the reviewed project test plan and the phase test plans, so the general discussion here is about the project test plan.
    Components are as follows:

    1. Test Plan id
    2. Introduction
    3. Test items
    4. Features to be tested
    5. Features not to be tested
    6. Approach
    7. Testing tasks
    8. Suspension criteria
    9. Features pass or fail criteria
    10. Test environment (Entry criteria, Exit criteria)
    11. Test deliverables
    12. Staff and training needs
    13. Responsibilities
    14. Schedule
    15. Risk and mitigation
    16. Approvals

    This is one of the standard approaches to preparing a test plan document, but things can vary from company to company.



    A test plan describes the approach, the features to be tested, the testers assigned, and whatever else you plan for your project. A test plan is usually prepared by a manager or team lead, though not exclusively; it depends on what the test plan is intended for. Some companies have defined a test plan as being what most would consider a test case, meaning that it is for one part of the functionality validation.

    A strategy is how you are going to address testing for the project. Some companies have a strategy or approach section in the test plan; others have a separate document.

    In some companies a general strategy document is created. This outlines what is expected or advised (depending on the company's policy). The test plan documents are then based on the organizational strategy and as such are more lightweight.

    Sunday, January 31, 2010

    PRIORITY VS SEVERITY

    “Priority” is associated with scheduling, and “severity” is associated with standards and criticality. “Priority” means something is afforded or deserves prior attention; a precedence established by order of importance (or urgency). “Severity” is the state or quality of being severe; severe implies adherence to rigorous standards or high principles and often suggests harshness, e.g. a severe code of behavior. For example, a misspelled company name on the home page is low severity but high priority, while a crash in a rarely used report may be high severity but low priority.

    Exploratory Testing & Adhoc Testing

    Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill of inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.



    Ad hoc testing is a commonly used term for software testing performed without planning or documentation.

    The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation: the tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.

    REVIEW PROCESS



    The different types of reviews vary from very informal (e.g. no written instructions for reviewers) to
    very formal (i.e. well structured and regulated). The formality of a review process is related to
    factors such as the maturity of the development process, any legal or regulatory requirements or the
    need for an audit trail. The way a review is carried out depends on the agreed objective of the review (e.g. find defects, gain understanding, or discussion and decision by consensus).

    Phases of a formal review
    A typical formal review has the following main phases:
    1. Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more
    formal review types (e.g. inspection); and selecting which parts of documents to look at.


    2. Kick-off: distributing documents; explaining the objectives, process and documents to the
    participants; and checking entry criteria (for more formal review types).

    3. Individual preparation: work done by each of the participants on their own before the review
    meeting, noting potential defects, questions and comments.

    4. Review meeting: discussion or logging, with documented results or minutes (for more formal
    review types). The meeting participants may simply note defects, make recommendations for
    handling the defects, or make decisions about the defects.

    5. Rework: fixing defects found, typically done by the author.

    6. Follow-up: checking that defects have been addressed, gathering metrics and checking on exit
    criteria (for more formal review types).

    Roles and responsibilities:

    A typical formal review will include the roles below:
    • Manager: decides on the execution of reviews, allocates time in project schedules and
      determines if the review objectives have been met.
    • Moderator: the person who leads the review of the document or set of documents, including
      planning the review, running the meeting, and follow-up after the meeting. If necessary, the
      moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
    • Author: the writer or person with chief responsibility for the document(s) to be reviewed.
    • Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
    • Scribe (or recorder): documents all the issues, problems and open points that were identified
      during the meeting.
    Looking at documents from different perspectives and using checklists can make reviews more
    effective and efficient, for example, a checklist based on perspectives such as user, maintainer,
    tester or operations, or a checklist of typical requirements problems.

    Types of review:
    A single document may be the subject of more than one review. If more than one type of review is
    used, the order may vary. For example, an informal review may be carried out before a technical
    review, or an inspection may be carried out on a requirements specification before a walkthrough
    with customers. The main characteristics, options and purposes of common review types are:

    Informal review
    Key characteristics:
    • no formal process;
    • there may be pair programming or a technical lead reviewing designs and code;
    • optionally may be documented;
    • may vary in usefulness depending on the reviewer;
    • main purpose: inexpensive way to get some benefit.
    Walkthrough
    Key characteristics:
    • meeting led by author;
    • scenarios, dry runs, peer group;
    • open-ended sessions;
    • optionally a pre-meeting preparation of reviewers, review report, list of findings and scribe (who is not the author);
    • may vary in practice from quite informal to very formal;
    • main purposes: learning, gaining understanding, defect finding.
    Technical review

    Key characteristics:
    • documented, defined defect-detection process that includes peers and technical experts;
    • may be performed as a peer review without management participation;
    • ideally led by trained moderator (not the author);
    • pre-meeting preparation;
    • optionally the use of checklists, review report, list of findings and management participation;
    • may vary in practice from quite informal to very formal;
    • main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical
      problems and check conformance to specifications and standards.
    Inspection
    Key characteristics:
    • led by trained moderator (not the author);
    • usually peer examination;
    • defined roles;
    • includes metrics;
    • formal process based on rules and checklists with entry and exit criteria;
    • pre-meeting preparation;
    • inspection report, list of findings;
    • formal follow-up process;
    • optionally, process improvement and reader;
    • main purpose: find defects.
    Walkthroughs, technical reviews and inspections can be performed within a peer group –
    colleagues at the same organizational level. This type of review is called a “peer review”.


    Success factors for reviews:
    Success factors for reviews include:
    • Each review has a clear predefined objective.
    • The right people for the review objectives are involved.
    • Defects found are welcomed, and expressed objectively.
    • People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
    • Review techniques are applied that are suitable to the type and level of software work products and reviewers.
    • Checklists or roles are used if appropriate to increase effectiveness of defect identification.
    • Training is given in review techniques, especially the more formal techniques, such as
      inspection.
    • Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
    • There is an emphasis on learning and process improvement.

    Software VERIFICATION VS VALIDATION

    Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.



    Verification:  was the product built right?

    Validation: was the right product built?


    In simple language, we can say that Verification is the pre-testing reviews, walk throughs, evaluations of the documents, plans and code, whereas Validation is the actual testing, finding the defects in the software.


    Bug Life Cycle


    In the SDLC a bug has a life cycle, and it must pass through that life cycle before it can be closed. A specific life cycle ensures that the process is standardized. The bug attains different states during this life cycle.


    The different states of a bug can be summarized as follows:

    1. New
    2. Open
    3. Assign
    4. Test
    5. Verified
    6. Deferred
    7. Reopened
    8. Duplicate
    9. Rejected and
    10. Closed

    Description of Various Stages:

    1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

    2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes its state to “OPEN”.

    3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

    4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. It specifies that the bug has been fixed and released to the testing team.

    5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. Many factors can lead to this decision: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

    6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

    7. Duplicate: If the bug is repeated twice, or two bugs describe the same concept, then one bug's status is changed to “DUPLICATE”.

    8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester tests the bug. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

    9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

    10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.
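
A minimal sketch of these states as a transition table (the allowed transitions below follow the descriptions above; real bug trackers differ in detail):

# Model the bug life cycle as a dict of allowed transitions.
ALLOWED = {
    "NEW":      {"OPEN", "REJECTED", "DUPLICATE", "DEFERRED"},
    "OPEN":     {"ASSIGN"},
    "ASSIGN":   {"TEST", "DEFERRED", "REJECTED"},
    "TEST":     {"VERIFIED", "REOPENED"},
    "VERIFIED": {"CLOSED"},
    "REOPENED": {"ASSIGN"},
    "DEFERRED": {"ASSIGN"},
}

def move(bug_state, new_state):
    if new_state not in ALLOWED.get(bug_state, set()):
        raise ValueError(f"illegal transition {bug_state} -> {new_state}")
    return new_state

# Walk one happy path through the life cycle.
state = "NEW"
for step in ("OPEN", "ASSIGN", "TEST", "VERIFIED", "CLOSED"):
    state = move(state, step)
print("bug reached state:", state)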

    While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process; it is much more efficient for an organization to conduct activities that prevent defects.

    Performance Vs Load Vs Stress Testing




    Here's a good interview question for a tester: how do you define performance/load/stress testing? Many times people use these terms interchangeably, but they have in fact quite different meanings. 

    Performance testing

    The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.

    A clearly defined set of expectations is essential for meaningful performance testing. If you don't know where you want to go in terms of the performance of the system, then it matters little which direction you take (remember Alice and the Cheshire Cat?). For example, for a Web application, you need to know at least two things:
    • expected load in terms of concurrent users or HTTP connections
    • acceptable response time
    Once you know where you want to be, you can start on your way there by constantly increasing the load on the system while looking for bottlenecks. To take again the example of a Web application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
    • at the application level, developers can use profilers to spot inefficiencies in their code (for example poor search algorithms)
    • at the database level, developers and DBAs can use database-specific profilers and query optimizers
    • at the operating system level, system engineers can use utilities such as top, vmstat, iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU, memory, swap, disk I/O; specialized kernel monitoring software can also be used
    • at the network level, network engineers can use packet sniffers such as tcpdump, network protocol analyzers such as ethereal, and various utilities such as netstat, MRTG, ntop, mii-tool
    From a testing point of view, the activities described above all take a white-box approach, where the system is inspected and monitored "from the inside out" and from a variety of angles. Measurements are taken and analyzed, and as a result, tuning is done.

    However, testers also take a black-box approach in running the load tests against the system under test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and measure response times. Some lightweight open source tools I've used in the past for this purpose are ab, siege and httperf. A more heavyweight tool I haven't used yet is OpenSTA. I also haven't used The Grinder yet, but it is high on my TODO list.
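
As a rough sketch of what such tools do (the URL and user count here are illustrative, and dedicated tools like ab, siege or httperf are far more capable), concurrent requests can be simulated and timed from Python:

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical system under test
CONCURRENT_USERS = 20

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Fire all requests concurrently and collect response times.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    times = list(pool.map(timed_request, range(CONCURRENT_USERS)))

print(f"avg response: {sum(times)/len(times):.3f}s, worst: {max(times):.3f}s")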

    When the results of the load test indicate that performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and your database is optimized on a given OS/hardware configuration. TDD practitioners will find very useful in this context a framework such as Mike Clark's jUnitPerf, which enhances existing unit test code with load test and timed test functionality. Once a particular function or method has been profiled and tuned, developers can then wrap its unit tests in jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this "continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to Python -- I called it pyUnitPerf.
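
As a rough sketch of the timed-test idea only (this is not the actual jUnitPerf or pyUnitPerf API), a decorator can fail a unit test that exceeds its time budget, so tuning regressions are caught automatically:

import functools
import time

def timed_test(max_seconds):
    """Fail the wrapped test if it runs longer than max_seconds."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            assert elapsed <= max_seconds, (
                f"{fn.__name__} took {elapsed:.3f}s, budget {max_seconds}s")
            return result
        return wrapper
    return decorator

@timed_test(0.5)
def test_search_is_fast():
    sorted(range(100_000))  # stand-in for the profiled function

test_search_is_fast()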

    If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:
    • Use Web cache mechanisms, such as the one provided by Squid
    • Publish highly-requested Web pages statically, so that they don't hit the database
    • Scale the Web server farm horizontally via load balancing
    • Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
    • Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
    • Increase the available network bandwidth
    Performance tuning can sometimes be more art than science, due to the sheer complexity of the systems involved in a modern Web application. Care must be taken to modify one variable at a time and redo the measurements, otherwise multiple changes can have subtle interactions that are hard to qualify and repeat.

    In a standard test environment such as a test lab, it will not always be possible to replicate the production server configuration. In such cases, a staging environment is used which is a subset of the production environment. The expected performance of the system needs to be scaled down accordingly.

    The cycle "run load test->measure 
    performance->tune system" is repeated until the system under test achieves the expected levels ofperformance. At this point, testers have a baseline for how the system behaves under normal conditions. This baseline can then be used in regression tests to gauge how well a new version of the software performs.

    Another common goal of performance testing is to establish benchmark numbers for the system under test. There are many industry-standard benchmarks such as the ones published by TPC, and many hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the TPC top-tens. It is common knowledge that one needs to be wary of any performance claims that do not include a detailed specification of all the hardware and software configurations that were used in that particular test.
    Load testing
    We have already seen load testing as part of the process of performance testing and tuning. In that context, it meant constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.

    In the testing literature, the term "load testing" is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.

    Examples of volume testing:
    • testing a word processor by editing a very large document
    • testing a printer by sending it a very large job
    • testing a mail server with thousands of user mailboxes
    • a specific case of volume testing is zero-volume testing, where the system is fed empty tasks
    Examples of longevity/endurance testing:
    • testing a client-server application by running the client in a loop against the server over an extended period of time
    Goals of load testing:
    • expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
    • ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
    Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels. On the other hand, load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.

    In the context of load testing, I want to emphasize the extreme importance of having large datasets available for testing. In my experience, many important bugs simply do not surface unless you deal with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large data sets, but fortunately any good scripting language worth its salt will do the job.
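
As a sketch of such a data generator (the record count and CSV format are illustrative assumptions):

# Generate a large mailbox data set for a load test.
import csv

USERS = 100_000

with open("test_mailboxes.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["login", "email", "quota_mb"])
    for i in range(USERS):
        writer.writerow([f"user{i:06d}", f"user{i:06d}@example.com", 500])

print(f"generated {USERS} mailbox records for the load test")
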
    Stress testing

    Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.

    Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. To take again the example of a Web application, here are some ways in which stress can be applied to the system:
    • double the baseline number for concurrent users/HTTP connections
    • randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
    • take the database offline, then restart it
    • rebuild a RAID array while the system is running
    • run processes that consume resources (CPU, memory, disk, network) on the Web and database servers
    I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However, stress testing does not break the system purely for the pleasure of breaking it, but instead it allows testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user, or does it merely display incomprehensible hex codes? Is the security of the system compromised because of unexpected failures? And the list goes on.
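
As a small sketch of the resource-consumption item in the list above (run only in a disposable test environment; the duration is arbitrary):

# Saturate every CPU core on the machine under test.
import multiprocessing
import time

def burn_cpu(seconds):
    end = time.time() + seconds
    while time.time() < end:
        pass  # busy-loop pins one core

if __name__ == "__main__":
    workers = [multiprocessing.Process(target=burn_cpu, args=(60,))
               for _ in range(multiprocessing.cpu_count())]
    for w in workers:
        w.start()
    # While the cores are saturated, observe how the application
    # degrades and whether it recovers once the load stops.
    for w in workers:
        w.join()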

    STUB VS DRIVER

    Both these terms, stub and driver, are mainly used in software integration testing.


    -> A stub is a piece of code emulating a called function; a driver is a piece of code emulating a calling function.


    -> Stubs are used in integration testing with the top-down approach.
        Drivers are used in integration testing with the bottom-up approach.

    -> Stub: a piece of code that simulates the activity of a missing component.
        Driver: a piece of code that passes test cases to another piece of code.



    Example - For unit testing of a 'Sales Order Printing' program, a 'driver' program will contain the code that creates sales order records using hard-coded data and then calls the 'Sales Order Printing' program. Suppose this printing program uses another unit which calculates sales discounts by some complex calculations. The call to that unit would then be replaced by a 'stub', which simply returns fixed discount data.
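
A minimal sketch of that example in Python (all names and the fixed discount value are illustrative):

def discount_stub(order):
    # Stub: replaces the complex discount-calculation unit that is
    # not ready yet; simply returns fixed data.
    return 10.0

def print_sales_order(order, discount_fn):
    # Unit under test.
    total = order["amount"] - discount_fn(order)
    print(f"Sales order {order['id']}: total after discount = {total}")

def sales_order_driver():
    # Driver: emulates the calling function by creating a sales
    # order record with hard-coded data, then invoking the unit
    # under test.
    order = {"id": "SO-001", "amount": 250.0}
    print_sales_order(order, discount_fn=discount_stub)

sales_order_driver()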








    Suppose I have a workflow in which the code functionality flows from one system to another and then to a third. If the second system in the flow is down, we can use a simulator for that step, which will act like a driver. Similarly, if there is some XML response that we would have been receiving from the second system, then that part would be simulated by a stub.