Saturday, December 4, 2010

Rework Effort Ratio

Rework Effort Ratio:
(Actual rework effort spent in that particular phase / Total actual effort spent in that phase) * 100
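To make the arithmetic concrete, here is a minimal Python sketch (the function name and sample figures are mine, purely for illustration):

def rework_effort_ratio(rework_effort, total_effort):
    # Percentage of a phase's effort that went into rework
    return (rework_effort / total_effort) * 100

# e.g. 12 person-hours of rework out of 120 spent in the design phase
print(rework_effort_ratio(12, 120))  # 10.0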

Requirements Stability Index and Requirements Creep

Requirements Stability Index:
{1 - (Total number of changes /number of initial requirements)}

Ideally, the GOAL should be an Index value of 1.

Requirements Creep:
(Total Number of requirements added/Number of initial requirements) * 100

Ideally, the Requirements Creep should be Zero.
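Both formulas are simple ratios over the initial requirement count. A small Python sketch, with illustrative names and numbers of my own:

def stability_index(total_changes, initial_requirements):
    # 1 means no requirement ever changed (the ideal)
    return 1 - (total_changes / initial_requirements)

def requirements_creep(requirements_added, initial_requirements):
    # Percentage growth over the initial requirement count (ideally 0)
    return (requirements_added / initial_requirements) * 100

# e.g. 50 initial requirements, 5 changed, 10 added
print(stability_index(5, 50))      # 0.9
print(requirements_creep(10, 50))  # 20.0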

Effort Variance, Schedule Variance, Schedule Slippage

Effort Variance:
Effort Variance can be calculated as {(Actual Effort - Estimated Effort) / Estimated Effort} * 100.

Schedule Variance:
It is calculated in the same way:
{(Actual Duration - Estimated Duration) / Estimated Duration} * 100

Schedule Slippage:
Schedule slippage is the amount of time by which a task has been delayed from its original baseline schedule. Its calculation is just as simple:
{(Actual End Date - Planned End Date) / (Planned End Date - Planned Start Date)} * 100

The GOAL should be to have all of them as ZERO.
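Here is a short Python sketch of all three, using the date arithmetic that schedule slippage needs (function names and sample values are illustrative assumptions, not any standard API):

from datetime import date

def effort_variance(actual_effort, estimated_effort):
    return (actual_effort - estimated_effort) / estimated_effort * 100

def schedule_variance(actual_duration, estimated_duration):
    return (actual_duration - estimated_duration) / estimated_duration * 100

def schedule_slippage(actual_end, planned_start, planned_end):
    # Delay beyond the baseline, as a percentage of the planned duration
    planned_days = (planned_end - planned_start).days
    delay_days = (actual_end - planned_end).days
    return delay_days / planned_days * 100

print(effort_variance(550, 500))  # 10.0 (10% over the estimate)
# A 30-day task that finished 3 days late slips by 10%:
print(schedule_slippage(date(2010, 12, 4), date(2010, 11, 1), date(2010, 12, 1)))  # 10.0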

DEFECT DENSITY

Defect Density is the number of confirmed defects detected in a software component during a defined period of development/operation, divided by the size of that software component.

The 'defects' are:
  • confirmed and agreed upon (not just reported)
  • not dropped (dropped defects are not counted)

The ‘period’ might be for one of the following:
  • for a duration (say, the first month, the quarter, or the year).
  • for each phase of the software life cycle.
  • for the whole of the software life cycle.
The ‘size’ is measured in one of the following:
  • Function Points (FP)
  • Source Lines of Code
DEFECT DENSITY = Number of Defects / Size
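In code this is a plain ratio; only the size unit needs care. A Python sketch, assuming size is given in KLOC (it could equally be Function Points):

def defect_density(confirmed_defects, size_kloc):
    # Confirmed, non-dropped defects per KLOC
    return confirmed_defects / size_kloc

# e.g. 9 confirmed defects in a 12-KLOC component
print(defect_density(9, 12))  # 0.75 defects per KLOC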

Defect Density Uses

  • For comparing the relative number of defects in various software components, so that high-risk components can be identified and resources focused on them.
  • For comparing software/products, so that the 'quality' of each software/product can be quantified and resources focused on those with low quality.

QA EFFECTIVENESS OR TEST EFFECTIVENESS

QA Effectiveness (or Test Effectiveness) = {Total number of internal defects / Total number of defects (Internal + External)} * 100

For example:
4 = internal defects raised by QA
6 = external defects found by the user or customer

QA effectiveness = 4/(4+6) x 100 = 40%

Goal should be to have QA effectiveness as 100%.
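The example above can be reproduced in a couple of lines of Python (names are illustrative):

def qa_effectiveness(internal_defects, external_defects):
    # Share of all defects that QA caught internally
    return internal_defects / (internal_defects + external_defects) * 100

print(qa_effectiveness(4, 6))  # 40.0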

DEFECT REMOVAL EFFICIENCY(DRE)

DRE measures how well defects are detected before delivery. It is calculated as the percentage of defects identified and corrected internally, relative to the total defects found over the complete project life cycle. Thus, DRE is the percentage of bugs eliminated by reviews, inspections, tests, etc.

Our prime Goal: D.R.E. = 1.0 (100%)

For example:
(E1) Errors found before delivery / within the activity = 5
(E2) Errors found after delivery / after the activity = 4

DRE = 5/(5+4) x 100 ≈ 55.6%
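The same calculation in Python, mirroring the example (the E1 and E2 values above):

def dre(errors_before_delivery, errors_after_delivery):
    # Percentage of all defects removed before delivery
    return errors_before_delivery / (errors_before_delivery + errors_after_delivery) * 100

print(round(dre(5, 4), 1))  # 55.6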

IS 100% TEST CASE COVERAGE ENOUGH??

The term Test Case Coverage means a measure of how many of the product requirements are exercised by the defined test cases. It is the testers' job to derive their test cases from the requirements, and the goal should be 100% coverage or close to it. Please bear in mind that 100% coverage is not the same as exhaustive testing: exhaustive testing means testing every path through the software with all possible values. So it is clear that even when coverage is 100%, i.e. all the requirements are covered, there are still many more test cases that could usefully be run.
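One simple way to compute requirements coverage is from a record of which requirements the test cases exercise. A Python sketch with made-up requirement IDs:

def requirements_coverage(all_requirements, tested_requirements):
    # Percentage of requirements exercised by at least one test case
    covered = all_requirements & tested_requirements
    return len(covered) / len(all_requirements) * 100

all_reqs = {"R1", "R2", "R3", "R4"}
tested = {"R1", "R2", "R3"}  # requirements touched by the defined test cases
print(requirements_coverage(all_reqs, tested))  # 75.0

Note that the sketch only checks whether each requirement is touched at all, which is exactly why 100% here says nothing about exhaustive path or value coverage.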

REVIEW EFFECTIVENESS

Review Effectiveness is a bit harder to define. But let me try to explain what I discovered from reading various articles on the internet.

When you hold a review, say for the SRS, then you would have a total number of defects discovered and you also know the number of pages in the document. So, to first order, you could divide the total number of defects by the number of pages and derive a defect density measure for the document. But that number by itself doesn’t tell you a whole lot. You need to have a good understanding of the expected number of defects per page coming out of a review. That number would come from your historical data. But this number still doesn’t yield an effectiveness measure. You could be finding lots of minor defects in a review and the major ones are slipping through. So you have to look for defects that were missed by the review in downstream activities such as testing and other reviews.

So, basically, it is really important to know two things: 1. review defects and 2. testing defects.
Hence, Review Effectiveness =
{Total Number of Review Defects / Total Defects (Review Defects + Testing Defects)} * 100


For example, there are 4 review defects and 5 testing defects.
Hence Review Effectiveness = 4/(4+5) x 100 ≈ 44%
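Again, a two-line Python version of the example:

def review_effectiveness(review_defects, testing_defects):
    # Share of all defects caught during reviews
    return review_defects / (review_defects + testing_defects) * 100

print(round(review_effectiveness(4, 5)))  # 44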
