Friday, April 30, 2010

DATABASE NORMALIZATION



Normalization is the process of efficiently organizing data in a database. There are two goals of the normalization process: eliminating redundant data (for example, storing the same data in more than one table) and ensuring data dependencies make sense (only storing related data in a table). Both of these are worthy goals as they reduce the amount of space a database consumes and ensure that data is logically stored.


First Normal Form (1NF)


First normal form (1NF) sets the very basic rules for an organized database:
  • Eliminate duplicative columns from the same table.
  • Create separate tables for each group of related data and identify each row with a unique column or set of columns (the primary key).
To remember first normal form, just remember PRIMARY KEY.
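
As an illustration, here is a minimal sketch using Python's sqlite3 module (the customer tables and columns are invented for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # Violates 1NF: duplicative phone columns in the same table.
    #   CREATE TABLE customer (id INTEGER, name TEXT, phone1 TEXT, phone2 TEXT)

    # In 1NF: no repeating columns, and each row is identified by a
    # primary key.
    conn.executescript("""
    CREATE TABLE customer (
        customer_id INTEGER PRIMARY KEY,
        name        TEXT NOT NULL
    );
    CREATE TABLE customer_phone (
        customer_id INTEGER NOT NULL,
        phone       TEXT NOT NULL,
        PRIMARY KEY (customer_id, phone)
    );
    """)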



Second Normal Form (2NF)

Second normal form (2NF) further addresses the concept of removing duplicative data:
  • Meet all the requirements of the first normal form.
  • Remove subsets of data that apply to multiple rows of a table and place them in separate tables.
  • Create relationships between these new tables and their predecessors through the use of foreign keys.
To remember second normal form, just remember FOREIGN KEY.
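
Continuing the hypothetical sqlite3 sketch, 2NF might look like this (the order and product tables are invented; product_name depends only on product_id, a subset of the composite key, so it moves to its own table):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite leaves FK checks off by default
    conn.executescript("""
    CREATE TABLE product (
        product_id   INTEGER PRIMARY KEY,
        product_name TEXT NOT NULL
    );
    CREATE TABLE order_item (
        order_id   INTEGER NOT NULL,
        product_id INTEGER NOT NULL REFERENCES product(product_id),
        quantity   INTEGER NOT NULL,
        PRIMARY KEY (order_id, product_id)
    );
    """)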




Third Normal Form (3NF)

Third normal form (3NF) goes one large step further:
  • Meet all the requirements of the second normal form.
  • Remove columns that are not dependent upon the primary key.
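
For example (a minimal sketch with invented employee and department tables), dept_name depends on dept_id rather than on the employee table's primary key, so 3NF moves it out:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE department (
        dept_id   INTEGER PRIMARY KEY,
        dept_name TEXT NOT NULL
    );
    CREATE TABLE employee (
        emp_id  INTEGER PRIMARY KEY,
        name    TEXT NOT NULL,
        dept_id INTEGER NOT NULL REFERENCES department(dept_id)
    );
    """)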




Fourth Normal Form (4NF)

Finally, fourth normal form (4NF) has one additional requirement:
  • Meet all the requirements of the third normal form.
  • Have no multi-valued dependencies.
Remember, these normalization guidelines are cumulative. For a database to be in 2NF, it must first fulfill all the criteria of a 1NF database.
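
As a sketch of removing a multi-valued dependency (the student tables are invented): a student's skills and languages are independent facts, so storing them in one (student_id, skill, language) table would force every skill to be paired with every language. 4NF splits them:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE student_skill (
        student_id INTEGER NOT NULL,
        skill      TEXT NOT NULL,
        PRIMARY KEY (student_id, skill)
    );
    CREATE TABLE student_language (
        student_id INTEGER NOT NULL,
        language   TEXT NOT NULL,
        PRIMARY KEY (student_id, language)
    );
    """)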


--source about.com

Wednesday, April 14, 2010

Top Down Testing

Top Down Testing: 


An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower-level components being simulated by stubs. Tested components are then used to test lower-level components. The process is repeated until the lowest-level components have been tested.
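
A minimal Python sketch of the idea (build_report and fetch_totals_stub are invented names; real projects often use a mocking library such as unittest.mock for the stubs):

    # High-level component under test.
    def build_report(fetch_totals):
        totals = fetch_totals()
        return "Total sales: %d" % sum(totals)

    # Stub simulating the lower-level component that is not yet tested
    # (or not yet written).
    def fetch_totals_stub():
        return [10, 20, 30]

    # Top-down step: test the top component against the stub first.
    assert build_report(fetch_totals_stub) == "Total sales: 60"

    # Later, the real lower-level component replaces the stub and the
    # tested top component is used to test it.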

USE CASE

Use Case: The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.



A use case defines a goal-oriented set of interactions between external actors and the system under consideration. Actors are parties outside the system that interact with the system (UML 1999, pp. 2.113-2.123). An actor may be a class of users, roles users can play, or other systems. Cockburn (1997) distinguishes between primary and secondary actors. A primary actor is one having a goal requiring the assistance of the system. A secondary actor is one from which the system needs assistance.

A use case is initiated by a user with a particular goal in mind, and completes successfully when that goal is satisfied. It describes the sequence of interactions between actors and the system necessary to deliver the service that satisfies the goal. It also includes possible variants of this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure to complete the service because of exceptional behavior, error handling, etc. The system is treated as a "black box", and the interactions with the system, including system responses, are as perceived from outside the system.

Thus, use cases capture who (actor) does what (interaction) with the system, for what purpose (goal), without dealing with system internals. A complete set of use cases specifies all the different ways to use the system, and therefore defines all behavior required of the system, bounding the scope of the system.
Generally, use case steps are written in an easy-to-understand structured narrative using the vocabulary of the domain. This is engaging for users who can easily follow and validate the use cases, and the accessibility encourages users to be actively involved in defining the requirements.

Test-driven development cycle (TDD)

Test-driven development (TDD) is a software development technique that relies on the repetition of a very short development cycle: first the developer writes a failing automated test case that defines a desired improvement or new function, then produces code to pass that test, and finally improves the new code to acceptable standards.



1. Add a test

In test-driven development, each new feature begins with writing a test. This test must inevitably fail because it is written before the feature has been implemented. (If it does not fail, then the proposed “new” feature is obviated.) To write a test, the developer must clearly understand the feature's specification and requirements. The developer can accomplish this through use cases that cover the requirements and exception conditions. This could also imply a variant or modification of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.
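
For example, a first failing test might look like this minimal unittest sketch (the calculator module and its multiply function are hypothetical and do not exist yet, which is exactly why the test fails first):

    # test_calculator.py
    import unittest

    from calculator import multiply  # fails until step 3 creates it

    class MultiplyTest(unittest.TestCase):
        def test_multiplies_two_numbers(self):
            self.assertEqual(multiply(3, 4), 12)

    if __name__ == "__main__":
        unittest.main()

Running python -m unittest test_calculator at this point fails (here with an ImportError) until step 3 supplies the code.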


2. Run all tests and see if the new one fails

This validates that the test harness is working correctly and that the new test does not mistakenly pass without requiring any new code. This step also tests the test itself, in the negative: it rules out the possibility that the new test will always pass, and therefore be worthless.
The new test should also fail for the expected reason. This increases confidence (although it does not entirely guarantee) that it is testing the right thing, and will pass only in intended cases.


3. Write some code

The next step is to write some code that will cause the test to pass. The new code written at this stage will not be perfect and may, for example, pass the test in an inelegant way. That is acceptable because later steps will improve and hone it.
It is important that the code written is only designed to pass the test; no further (and therefore untested) functionality should be predicted and 'allowed for' at any stage.
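
Continuing the hypothetical example from step 1, just enough code to make the test pass:

    # calculator.py -- the simplest thing that passes the test;
    # polishing is deferred to the refactoring step.
    def multiply(a, b):
        return a * b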


4. Run the automated tests and see them succeed

If all test cases now pass, the programmer can be confident that the code meets all the tested requirements. This is a good point from which to begin the final step of the cycle.


5. Refactor code

Now the code can be cleaned up as necessary. By re-running the test cases, the developer can be confident that refactoring is not damaging any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to removing any duplication between the test code and the production code, for example, strings that were repeated in both in order to make the test pass in step 3.

--source wiki

Bottom Up Integration Testing


In bottom-up integration testing, modules at the lowest level are developed first, and the other modules that lead up toward the 'main' program are integrated and tested one at a time. Bottom-up integration also uses test drivers to drive and pass appropriate data to the lower-level modules. As the code for the other modules becomes ready, these drivers are replaced with the actual modules. In this approach, the lower-level modules are tested extensively, making sure that the most heavily used modules are tested properly.
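
A minimal Python sketch of a test driver (apply_tax and its rate are invented for the example):

    # Low-level module, developed and tested first.
    def apply_tax(amount, rate):
        return round(amount * (1 + rate), 2)

    # Test driver: stands in for the not-yet-ready higher-level module
    # and pushes representative data down into apply_tax.
    def driver():
        assert apply_tax(100.0, 0.08) == 108.0
        assert apply_tax(0.0, 0.08) == 0.0

    driver()
    # Once the real calling module is ready, it replaces this driver.
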
Advantages
1. The behavior of the interaction points is crystal clear, as components are added in a controlled manner and tested repeatedly.
2. Appropriate for applications where bottom up design methodology is used.

Disadvantages
1. Writing and maintaining test drivers or harnesses is more difficult than writing stubs.
2. This approach is not suitable for software developed using a top-down approach.

    Friday, April 9, 2010

    TESTING TECHNIQUES

    Classified into two categories:

    1. Black Box Testing.

    2. White Box Testing.



    Black Box Testing is done through the following techniques:
    1. Error Guessing
    2. Equivalence Partitioning
    3. Boundary Value Analysis (see the sketch after this list)
    4. Cause-Effect Graphing
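
    As a sketch of equivalence partitioning and boundary value analysis in Python (validate_age and its 18-65 range are invented for the example):

        # Hypothetical function under test: accepts ages 18-65 inclusive.
        def validate_age(age):
            return 18 <= age <= 65

        # Equivalence partitioning: one representative value per partition.
        assert validate_age(40) is True    # valid partition
        assert validate_age(10) is False   # below-range partition
        assert validate_age(70) is False   # above-range partition

        # Boundary value analysis: probe the edges of the valid range.
        for age, expected in [(17, False), (18, True), (65, True), (66, False)]:
            assert validate_age(age) is expected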

    White Box Testing is done through the following techniques:
    1. Statement coverage
    2. Branch/loop coverage
    3. Flow coverage
    4. Condition coverage
    5. Data flow coverage
    6. Decision coverage

    Agile Methodology


    Most software development life cycle methodologies are either iterative or follow a sequential model (as the waterfall model does). As software development becomes more complex, these models cannot efficiently adapt to the continuous and numerous changes that occur. Agile methodology was developed to respond to changes quickly and smoothly. Although the iterative methodologies tend to remove the disadvantage of sequential models, they still are based on the traditional waterfall approach. Agile methodology is a collection of values, principles, and practices that incorporates iterative development, test, and feedback into a new style of development. For an overview of agile methodology, see the Agile Modeling site at http://www.agilemodeling.com/.
    The key differences between agile and traditional methodologies are as follows:
    • Development is incremental rather than sequential. Software is developed in incremental, rapid cycles. This results in small, incremental releases, with each release building on previous functionality. Each release is thoroughly tested, which ensures that all issues are addressed in the next iteration.
    • People and interactions are emphasized, rather than processes and tools. Customers, developers, and testers constantly interact with each other. This interaction ensures that the tester is aware of the requirements for the features being developed during a particular iteration and can easily identify any discrepancy between the system and the requirements.
    • Working software is the priority rather than detailed documentation. Agile methodologies rely on face-to-face communication and collaboration, with people working in pairs. Because of the extensive communication with customers and among team members, the project does not need a comprehensive requirements document.
    • Customer collaboration is used, rather than contract negotiation. All agile projects include customers as a part of the team. When developers have questions about a requirement, they immediately get clarification from customers.
    • Responding to change is emphasized, rather than extensive planning. Extreme Programming does not preclude planning your project. However, it suggests changing the plan to accommodate any changes in assumptions for the plan, rather than stubbornly trying to follow the original plan.
    Agile methodology has various derivative approaches, such as Extreme Programming, Dynamic Systems Development Method (DSDM), and SCRUM. Extreme Programming is one of the most widely used approaches.




    Agile Testing: Example

    Agile methodology with Extreme Programming and test-driven development was used to develop the Smart Client Offline Application Block. The following are highlights of the approach taken on the project:
    • The test team and the development team were not formally separated. The developers worked in pairs, with one person developing the test cases and the other writing the functionality for the module.
    • There was much more interaction among team members than there is when following a traditional development model. In addition to using the informal chat-and-develop mode, the team held a 30-minute daily standup meeting, which gave team members a forum for asking questions and resolving problems, and weekly iteration review meetings to track the progress of each iterative cycle.
    • Project development began without any formal design document. The specifications were in the form of user stories that were agreed upon by the team members. In the weekly iterative review meetings, team members planned how to complete these stories and how many iterations to assign for each story.
    • Each story was broken down into several tasks. All of the stories and corresponding tasks were written down on small cards that served as the only source of design documentation for the application block.
    • While developing each task or story, NUnit test suites were written to drive the development of features.
    • No formal test plans were developed. The testing was primarily based on the tasks or stories for feature development. The development team got immediate feedback from the test team. Having the test team create the quick start samples gave the development team a perspective on the real-life usage of the application block.
    • After the task or story passed all of the NUnit test cases and was complete, quick start samples were developed to showcase the functionality. The quick start samples demonstrated the usage of the application block and were useful for further testing the code in the traditional way (functional and integration tests). Any discrepancies found in this stage were reported immediately and were fixed on a case-by-case basis. The modified code was tested again with the automated test suites and then was handed over to be tested again with the quick start samples.

    RETEST VS REGRESSION TESTING

    RETEST:

    Retesting is the process of checking whether the reported bugs have been fixed by the development team.
    Purpose: To identify whether the given bugs/issues/defects have been fixed.




    REGRESSION TESTING:


    Regression Testing: Testing conducted for the purpose of evaluating whether or not a change to the system (all CM items) has introduced a new failure.

    Purpose: To identify whether fixing the reported issues/bugs/defects has introduced new issues into the system.

    SMOKE TESTING & SANITY TESTING

    Smoke testing is done by testers before accepting a build for further testing. It is believed that, after code reviews, smoke testing is the most cost-effective method for identifying and fixing defects in software.


    A smoke test is a collection of written tests that are performed on a system prior to being accepted for further testing. This is also known as a Build Verification Test.


    This is a "shallow and wide" approach to the application. The tester "touches" all areas of the application without getting too deep, looking for answers to basic questions like, "Can I launch the test item at all?", "Does it open to a window?", "Do the buttons on the window do things?"


    Software Sanity Tests are commonly conflated with smoke tests. A smoke test determines whether it is possible to continue testing, as opposed to whether it is reasonable. A software smoke test determines whether the program launches and whether its interfaces are accessible and responsive (for example, the responsiveness of a web page or an input button). If the smoke test fails, it is impossible to conduct a sanity test. In contrast, the ideal sanity test exercises the smallest subset of application functions needed to determine whether the application logic is generally functional and correct (for example, an interest rate calculation for a financial application). If the sanity test fails, it is not reasonable to attempt more rigorous testing. Both sanity tests and smoke tests are ways to avoid wasting time and effort by quickly determining whether an application is too flawed to merit any rigorous testing. Many companies run sanity tests and unit tests on automated builds as part of their development process.


    A SANITY test is a very brief run-through of the functionality of a computer program, system, calculation, or other analysis, to assure that the system or methodology works as expected, often prior to a more exhaustive round of testing.

    Defect Management

    The elimination of bugs in software and flaws in hardware. Defect management is part of a software or hardware development project.


    Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. For the foreseeable future, it will not be possible to eliminate defects. While defects may be inevitable, we can minimize their number and impact on our projects. To do this, development teams need to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing the impact of defects. A little investment in this process can yield significant returns.

    REGRESSION TESTING

    Regression testing is any type of software testing that seeks to uncover software errors by partially retesting a modified program. The intent of regression testing is to provide a general assurance that no additional errors were introduced in the process of fixing other problems. Regression testing is commonly used to efficiently test the system by systematically selecting the appropriate minimum suite of tests needed to adequately cover the affected change. Common methods of regression testing include rerunning previously run tests and checking whether previously fixed faults have re-emerged. "One of the main reasons for regression testing is that it's often extremely difficult for a programmer to figure out how a change in one part of the software will echo in other parts of the software."   --- Wiki


    Any time you modify an implementation within a program, you should also do regression testing. You can do so by rerunning existing tests against the modified code to determine whether the changes break anything that worked prior to the change and by writing new tests where necessary. Adequate coverage without wasting time should be a primary consideration when conducting regression tests. Try to spend as little time as possible doing regression testing without reducing the probability that you will detect new failures in old, already tested code.  -- msdn
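
    A minimal sketch of that rerun step, assuming the project keeps its existing tests in a tests/ directory and uses Python's unittest:

        import unittest

        # Rediscover and rerun every existing test after each change; any
        # new failure in previously passing tests signals a regression.
        suite = unittest.defaultTestLoader.discover("tests")
        result = unittest.TextTestRunner(verbosity=2).run(suite)

        if not result.wasSuccessful():
            raise SystemExit("Regression detected: previously passing tests now fail.")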


    Regression testing is the process of testing changes to computer programs to make sure that the older programming still works with the new changes. Regression testing is a normal part of the program development process and, in larger companies, is done by code testing specialists. Test department coders develop code test scenarios and exercises that will test new units of code after they have been written. These test cases form what becomes the test bucket. Before a new version of a software product is released, the old test cases are run against the new version to make sure that all the old capabilities still work. The reason they might not work is because changing or adding new code to a program can easily introduce errors into code that is not intended to be changed.  --

    Tuesday, April 6, 2010

    STLC


    STLC: Software Test Life Cycle
    1. Test Strategies
    2. Test Plan
    3. Test Cases
    4. Test Case Execution
    5. Bug Analysis
    6. Reporting the Bug
    7. Regression Testing