Rework Effort Ratio
(Actual rework effort spent in that particular phase / Total actual efforts spent in that phase) * 100
Saturday, December 4, 2010
Requirements Stability Index and Requirements Creep
Requirements Stability Index:
{1 - (Total number of changes /number of initial requirements)}
Ideally, the goal should be an index value of 1.
Requirements Creep:
(Total Number of requirements added/Number of initial requirements) * 100
Ideally, the Requirements Creep should be Zero.
Effort Variance, Schedule Variance, Schedule Slippage
Effort Variance:
Effort Variance can be calculated as {(Actual Efforts-Estimated Efforts) / Estimated Efforts} *100.
Schedule Variance:
It is calculated similarly:
{(Actual Duration - Estimated Duration)/Estimated Duration} *100
Schedule Slippage:
Schedule slippage is the amount by which a task has been delayed from its original baseline schedule. It is calculated as:
(Actual End date - Estimated End date) / (Planned End Date – Planned Start Date) * 100
The GOAL should be to have all of them as ZERO.
Labels:
Effort Variance,
METRICS,
Schedule Slippage,
Schedule Variance
DEFECT DENSITY
Defect Density is the number of confirmed defects detected in software/component during a defined period of development/operation divided by the size of the software/component.
The 'defects' are:
- confirmed and agreed upon (not just reported).
- Dropped defects are not counted.
The ‘period’ might be for one of the following:
- for a duration (say, the first month, the quarter, or the year).
- for each phase of the software life cycle.
- for the whole of the software life cycle.
---------------------------
SIZE
- Function Points (FP)
- Source Lines of Code
Defect Density Uses
- For comparing the relative number of defects in various software components so that high-risk components can be identified and resources focused towards them
- For comparing software/products so that ‘quality’ of each software/product can be quantified and resources focused towards those with low quality.
QA EFFECTIVENESS OR TEST EFFECTIVENESS
QA EFFECTIVENESS OR TEST EFFECTIVENESS =
Total number of internal defects X 100
-----------------------------------------------
Total number of defects( i.e. Internal + External)
For example:
Internal defects raised by QA = 4
External defects found by the user or customer = 6
QA effectiveness = 4/(4+6) x 100 = 40%
Goal should be to have QA effectiveness as 100%.
DEFECT REMOVAL EFFICIENCY(DRE)
DRE is a measure to detect defects before delivery. It is calculated as a percentage of the defects identified and corrected internally with respect to the total defects found in the complete project life cycle. Thus, DRE is the percentage of bugs eliminated by reviews, inspections, tests etc.
Our prime Goal: D.R.E. = 1.0 (100%)
For e.g.
(E1) Errors found before delivery / within activity = 5
(E2) Errors found after delivery / after activity = 4
DRE = 5/(5+4) x 100 ≈ 56%
IS 100% TEST CASE COVERAGE ENOUGH??
The term Test Case Coverage means a measure of how many of the product requirements are being tested by the defined test cases. It is the testers' job to define their test cases based on the requirements, and the goal should be 100% coverage or close to it. Please bear in mind that 100% coverage is not the same as exhaustive testing. Exhaustive testing means testing every path through the software using all possible values. So it is clear that even though the coverage is 100%, i.e. all the requirements are covered, there will still be several other test cases that need to be executed.
REVIEW EFFECTIVENESS
Review Effectiveness is a bit harder to define. But let me try and explain what I discovered from reading various articles over the internet.
When you hold a review, say for the SRS, then you would have a total number of defects discovered and you also know the number of pages in the document. So, to first order, you could divide the total number of defects by the number of pages and derive a defect density measure for the document. But that number by itself doesn’t tell you a whole lot. You need to have a good understanding of the expected number of defects per page coming out of a review. That number would come from your historical data. But this number still doesn’t yield an effectiveness measure. You could be finding lots of minor defects in a review and the major ones are slipping through. So you have to look for defects that were missed by the review in downstream activities such as testing and other reviews.
So, basically, it is really important to know two things: 1. Review Defects and 2. Testing Defects.
Hence, Review Effectiveness will be =
Total Number of Review Defects X 100
-----------------------------------------------------------------------------
Total Defects (which includes Review defects + Testing Defects)
For e.g. there are 4 Review Defects and 5 Testing Defects.
Hence Review Effectiveness = 4/(4+5) x 100= 44%
Saturday, August 28, 2010
ARTICLE ON IPHONE APPLICATIONS
An interesting article about iPhone applications (apps). Do read it here:
http://paragonist.blogspot.com/2010/08/no-halts-pleaseiphone-apps.html
Thursday, June 17, 2010
CRONJOB-CRONTAB
Setting up cron jobs in Unix and Solaris
cron is a Unix/Solaris utility that allows tasks to be run automatically in the background at regular intervals by the cron daemon. These tasks are often termed cron jobs. Crontab (CRON TABle) is a file which contains the schedule of cron entries to be run at specified times.
1. Crontab Restrictions
You can execute crontab if your name appears in the file /usr/lib/cron/cron.allow. If that file does not exist, you can use crontab if your name does not appear in the file /usr/lib/cron/cron.deny.
If only cron.deny exists and is empty, all users can use crontab. If neither file exists, only the root user can use crontab. The allow/deny files consist of one user name per line.
2. Crontab Commands
export EDITOR=vi Specify an editor to open the crontab file with.
crontab -e Edit your crontab file, or create one if it doesn’t already exist.
crontab -l Display your crontab file.
crontab -r Remove your crontab file.
crontab -v Display the last time you edited your crontab file. (This option is only available on a few systems.)
3. Crontab file
Crontab syntax :
A crontab file has five fields for specifying day , date and time followed by the command to be run at that interval.
* * * * * command to be executed
- - - - -
| | | | |
| | | | +----- day of week (0 - 6) (Sunday=0)
| | | +------- month (1 - 12)
| | +--------- day of month (1 - 31)
| +----------- hour (0 - 23)
+------------- min (0 - 59)
* in the value field above means all legal values as in braces for that column.
The value column can have a * or a list of elements separated by commas. An element is either a number in the ranges shown above or two numbers in the range separated by a hyphen (meaning an inclusive range).
Notes
A.) Repeat patterns like */2 for every 2 minutes or */10 for every 10 minutes are not supported by all operating systems. If you try to use one and crontab complains, it is probably not supported.
B.) The specification of days can be made in two fields: day of month and day of week. If both are specified in an entry, they are cumulative, meaning the command runs when either field matches.
4. Crontab Example
A line in the crontab file like the one below removes the tmp files from /home/someuser/tmp each day at 6:30 PM.
30 18 * * * rm /home/someuser/tmp/*
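A couple more illustrative entries (the script paths here are hypothetical), showing a comma-separated list and a range in the time fields:

```
# At 00:00 and 12:00 every day (list in the hour field)
0 0,12 * * * /home/someuser/bin/sync.sh
# At 9:15 AM every weekday, Monday to Friday (range in the day-of-week field)
15 9 * * 1-5 /home/someuser/bin/report.sh
```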
5. Crontab Environment
cron invokes the command from the user’s HOME directory with the shell, (/usr/bin/sh).
cron supplies a default environment for every shell, defining:
HOME=user’s-home-directory
LOGNAME=user’s-login-id
PATH=/usr/bin:/usr/sbin:.
SHELL=/usr/bin/sh
Users who desire to have their .profile executed must explicitly do so in the crontab entry or in a script called by the entry.
6. Disable Email
By default, cron sends an email to the user account executing the cron job. If this is not needed, put the following redirection at the end of the cron job line:
>/dev/null 2>&1
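For example, the cleanup entry from section 4 with mail disabled:

```
30 18 * * * rm /home/someuser/tmp/* >/dev/null 2>&1
```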
7. Generate log file
To collect the cron execution log in a file:
30 18 * * * rm /home/someuser/tmp/* > /home/someuser/cronlogs/clean_tmp_dir.log
reference: adminschoice.com
fork()
int fork() turns a single process into 2 identical processes, known as the parent and the child. On success, fork() returns 0 to the child process and returns the process ID of the child process to the parent process. On failure, fork() returns -1 to the parent process, sets errno to indicate the error, and no child process is created.
NOTE: The child process will have its own unique PID.
NOTE: The processes have unique ID's which will be different at each run.
It is also impossible to tell in advance which process will get the CPU's time first, so one run may differ from the next.
When we spawn 2 processes we can easily detect (in each process) whether it is the child or parent since fork returns 0 to the child. We can trap any errors if fork returns a -1. i.e.:
int pid; /* process identifier */

pid = fork();
if (pid < 0) {
    printf("Cannot fork!\n");
    exit(1);
}
if (pid == 0) {
    /* Child process */
    ......
} else {
    /* Parent process: pid is the child's PID */
    ....
}
SHARED MEMORY IN UNIX
In the fork() system call, we mentioned that a parent and its children have separate address spaces. While this provides a more secure way of executing parent and child processes (because they will not interfere with each other), they share nothing and have no way to communicate with each other. A shared memory is an extra piece of memory that is attached to some address spaces for their owners to use. As a result, all of these processes share the same memory segment and have access to it.
Shared memory is a feature supported by UNIX System V, including Linux, SunOS and Solaris. One process must explicitly ask for an area, using a key, to be shared by other processes. This process will be called the server. All other processes, the clients, that know the shared area can access it. However, there is no protection to a shared memory and any process that knows it can access it freely. To protect a shared memory from being accessed at the same time by several processes, a synchronization protocol must be setup.
- For a server, it should be started before any client. The server should perform the following tasks:
- Ask for a shared memory with a memory key and memorize the returned shared memory ID. This is performed by system call shmget().
- Attach this shared memory to the server's address space with system call shmat().
- Initialize the shared memory, if necessary.
- Do something and wait for all clients' completion.
- Detach the shared memory with system call shmdt().
- Controlling the shared memory using shmctl(), read below for complete description.
- For the client part, the procedure is almost the same:
- Ask for a shared memory with the same memory key and memorize the returned shared memory ID.
- Attach this shared memory to the client's address space.
- Use the memory.
- Detach all shared memory segments, if necessary.
- Exit.
shmctl() is used to alter the permissions and other characteristics of a shared memory segment. It is prototyped as follows:
int shmctl(int shmid, int cmd, struct shmid_ds *buf);
The process must have an effective user ID of owner, creator or superuser to perform this command. The cmd argument is one of the following control commands:
- SHM_LOCK
- -- Lock the specified shared memory segment in memory. The process must have the effective ID of superuser to perform this command.
- SHM_UNLOCK
- -- Unlock the shared memory segment. The process must have the effective ID of superuser to perform this command.
- IPC_STAT
- -- Return the status information contained in the control structure and place it in the buffer pointed to by buf. The process must have read permission on the segment to perform this command.
- IPC_SET
- -- Set the effective user and group identification and access permissions. The process must have an effective ID of owner, creator or superuser to perform this command.
- IPC_RMID
- -- Remove the shared memory segment.
Friday, June 11, 2010
Script make backup of all file names
#!/bin/bash
FILES="$@"
for f in $FILES
do
  # if .bak backup file exists, read next file
  if [ -f ${f}.bak ]
  then
    echo "Skipping $f file..."
    continue # read next file and skip cp command
  fi
  # we are here means no backup file exists, just use cp command to copy file
  /bin/cp $f $f.bak
done
How to add two floating point numbers ?
You can make use of the 'bc' command to do arithmetic operations in a shell script.
The following script illustrates a sample implementation:
a=1.2
b=3.1
x=$(echo "$a + $b" | bc)
echo $x
How would you get the character positions 10-20 from a text file?
cat filename.txt | cut -c 10-20
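A quick way to check this against a known string (here the alphabet, so the expected characters are easy to count):

```shell
text="abcdefghijklmnopqrstuvwxyz"
# Extract character positions 10 through 20
chunk=$(echo "$text" | cut -c 10-20)
echo "$chunk"
```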
:)
If you have a string "one two three", Which shell command would you extract the strings?
Input="one two three"
i=0
for var in $Input
do
  arr[$i]=$var
  i=$((i + 1))
done
echo $i
for (( j=0; j<i; j++ ))
do
  echo ${arr[$j]}
done
It is a slightly long solution, but that's just a way of playing with the script ;)
How to: UNIX List just directories or directory names
Display or list all directories
Type the following command:
$ ls -l | grep '^d'
You have a current dir containing a set of directories which contain files.
One file can reside in many directories.
Write a script which returns the number of unique file names in
all the subdirectories of the current dir.
find . -type f | awk -F/ '{print $NF}' | sort -u | wc -l
Display or list only files
Type the following command:
$ ls -l | grep -v '^d'
Tuesday, June 8, 2010
IF IN SHELL SCRIPT
$ cat > elf
#!/bin/sh
if [ $1 -gt 0 ]
then
  echo "$1 is a positive number"
elif [ $1 -lt 0 ]
then
  echo "$1 is a negative number"
elif [ $1 -eq 0 ]
then
  echo "$1 is zero"
else
  echo "Opps! $1 is not number, give number"
fi
Try above script as follows:
$ chmod 755 elf
$ ./elf 1
$ ./elf -2
$ ./elf 0
$ ./elf a
Here is the o/p for the last sample run:
./elf: [: -gt: unary operator expected
./elf: [: -lt: unary operator expected
./elf: [: -eq: unary operator expected
Opps! a is not number, give number
The above program gives errors for the last run: an integer comparison is expected, so errors like "./elf: [: -gt: unary operator expected" occur, but our program still notifies the user of this problem with the message "Opps! a is not number, give number".
SHELL COMPARISONS, CONDITIONS, CHECKS
For mathematics, use the following operators in a shell script:
Mathematical Operator in Shell Script | Meaning | Normal Arithmetical/Mathematical Statement | For test statement with if command | For [ expr ] statement with if command |
-eq | is equal to | 5 == 6 | if test 5 -eq 6 | if [ 5 -eq 6 ] |
-ne | is not equal to | 5 != 6 | if test 5 -ne 6 | if [ 5 -ne 6 ] |
-lt | is less than | 5 < 6 | if test 5 -lt 6 | if [ 5 -lt 6 ] |
-le | is less than or equal to | 5 <= 6 | if test 5 -le 6 | if [ 5 -le 6 ] |
-gt | is greater than | 5 > 6 | if test 5 -gt 6 | if [ 5 -gt 6 ] |
-ge | is greater than or equal to | 5 >= 6 | if test 5 -ge 6 | if [ 5 -ge 6 ] |
NOTE: == is equal, != is not equal.
For string Comparisons use
Operator | Meaning |
string1 = string2 | string1 is equal to string2 |
string1 != string2 | string1 is NOT equal to string2 |
string1 | True if string1 is NOT NULL |
-n string1 | True if string1 is NOT NULL (length is non-zero) |
-z string1 | True if string1 is NULL (length is zero) |
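A small sketch exercising the string tests above:

```shell
s1="hello"
s2=""

# -n: true when the string has non-zero length
[ -n "$s1" ] && nonempty=true || nonempty=false
# -z: true when the string has zero length
[ -z "$s2" ] && empty=true || empty=false
# != : true when the two strings differ
[ "$s1" != "$s2" ] && differ=true || differ=false

echo "nonempty=$nonempty empty=$empty differ=$differ"
```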
The shell can also test for file and directory types
Test | Meaning |
-s file | True if file exists and is non-empty |
-f file | True if file exists and is a normal file, not a directory |
-d dir | True if dir exists and is a directory, not a file |
-w file | True if file is writable |
-r file | True if file is readable |
-x file | True if file is executable |
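The file tests can be tried on a scratch file (the path here is arbitrary):

```shell
tmpfile=/tmp/filetest.$$   # arbitrary scratch path
echo "some data" > "$tmpfile"

[ -f "$tmpfile" ] && is_file=yes || is_file=no       # normal file?
[ -s "$tmpfile" ] && non_empty=yes || non_empty=no   # non-empty?
[ -d "$tmpfile" ] && is_dir=yes || is_dir=no         # directory?

echo "file=$is_file non_empty=$non_empty dir=$is_dir"
rm -f "$tmpfile"
```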
Logical Operators
Logical operators are used to combine two or more conditions at a time.
Operator | Meaning |
! expression | Logical NOT |
expression1 -a expression2 | Logical AND |
expression1 -o expression2 | Logical OR |
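A short sketch combining conditions with -a, -o and !:

```shell
a=5

# -a: both conditions must hold
if [ $a -gt 0 -a $a -lt 10 ]; then in_range=yes; else in_range=no; fi
# -o: either condition may hold
if [ $a -lt 0 -o $a -gt 100 ]; then out_of_bounds=yes; else out_of_bounds=no; fi
# !: negates the condition
if [ ! $a -eq 0 ]; then nonzero=yes; else nonzero=no; fi

echo "in_range=$in_range out_of_bounds=$out_of_bounds nonzero=$nonzero"
```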
FEW LINUX COMMANDS RELATED TO PROCESS
For this purpose | Use this Command | Examples* |
To see currently running process | ps | $ ps |
To stop any process by PID i.e. to kill process | kill {PID} | $ kill 1012 |
To stop processes by name i.e. to kill process | killall {Process-name} | $ killall httpd |
To get information about all running process | ps -ag | $ ps -ag |
To stop all process except your shell | kill 0 | $ kill 0 |
For background processing (With &, use to put particular command and program in background) | linux-command & | $ ls / -R | wc -l & |
To display the owner of the processes along with the processes | ps aux | $ ps aux |
To see if a particular process is running or not. For this purpose you have to use the ps command in combination with the grep command | ps ax | grep process-U-want-to-see | For e.g. to see whether the Apache web server process is running or not, give the command $ ps ax | grep httpd |
To see currently running processes and other information like memory and CPU usage with real time updates. | top See the output of top command. | $ top Note that to exit from top command press q. |
To display a tree of processes | pstree | $ pstree |
* To run some of these commands you need to be root or an equivalent user.
HOW TO READ LINES OF DATA IN A SHELL SCRIPT?
There's a very easy way to solve this:
while read myline
do
echo $myline
done < inputfile
If the fields in a given line are separated by a known delimiter, either a tab or a comma, for example, then I suggest that you could use the cut command to extract specific values.
To demonstrate, let's pull some useful data out of the /etc/passwd file, a file that has lines of data in known fields, separated with a ":" as the delimiter. Here's a typical line of data-
unknown:*:99:99:Unknown User:/var/empty:/usr/bin/false
Let's just pull out login and full name to see what that looks like:
#!/bin/sh
while read inputline
do
login=`(echo $inputline | cut -d: -f1)`
fulln=`(echo $inputline | cut -d: -f5)`
echo login = $login and fullname = $fulln
done < /etc/passwd
exit 0
You can see how the cut program makes this a straightforward task, albeit one that can be done more quickly in other scripting languages like Perl. But if you want to work with shell scripts, the combination of a while read loop with the input redirected and the great cut command should give you all the data parsing capabilities you need.
Shell Built in Variables
Shell Built in Variables | Meaning |
$# | Number of command line arguments. Useful to test no. of command line args in shell script. |
$* | All arguments to shell |
$@ | Same as above |
$- | Option supplied to shell |
$$ | PID of shell |
$! | PID of last started background process (started with &) |
$? | Exit status of the last executed command |
(1) If the return value is zero, the command executed successfully.
(2) If the return value is nonzero, the command was not successful or there was some error executing the command/shell script.
This value is known as the Exit Status.
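A few of these can be demonstrated directly; set -- is used here to simulate command line arguments:

```shell
set -- one two three   # simulate three command line arguments

nargs=$#        # number of command line arguments
allargs="$*"    # all arguments as a single string
mypid=$$        # PID of the current shell

echo "args=$nargs list='$allargs' pid=$mypid"

true            # a command that succeeds
status=$?       # exit status of the last command
echo "exit status of true = $status"
```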
Monday, June 7, 2010
SQL JOINS
SQL joins are used to query data from two or more tables, based on a relationship between certain columns in these tables.
Different SQL JOINs
- JOIN(inner): Return rows when there is at least one match in both tables
- LEFT JOIN: Return all rows from the left table, even if there are no matches in the right table
- RIGHT JOIN: Return all rows from the right table, even if there are no matches in the left table
- FULL JOIN: Return rows when there is a match in one of the tables
SQL INNER JOIN Syntax
SELECT column_name(s) FROM table_name1 INNER JOIN table_name2 ON table_name1.column_name=table_name2.column_name |
PS: INNER JOIN is the same as JOIN.
The INNER JOIN keyword returns rows when there is at least one match in both tables. If there are rows in "Persons" that do not have matches in "Orders", those rows will NOT be listed.
SQL LEFT JOIN Keyword
The LEFT JOIN keyword returns all rows from the left table (table_name1), even if there are no matches in the right table (table_name2).
SQL LEFT JOIN Syntax
SELECT column_name(s) FROM table_name1 LEFT JOIN table_name2 ON table_name1.column_name=table_name2.column_name |
PS: In some databases LEFT JOIN is called LEFT OUTER JOIN.
SQL RIGHT JOIN Keyword
The RIGHT JOIN keyword returns all rows from the right table (table_name2), even if there are no matches in the left table (table_name1).
SQL RIGHT JOIN Syntax
SELECT column_name(s) FROM table_name1 RIGHT JOIN table_name2 ON table_name1.column_name=table_name2.column_name |
PS: In some databases RIGHT JOIN is called RIGHT OUTER JOIN.
SQL FULL JOIN Keyword
The FULL JOIN keyword returns rows when there is a match in either of the tables.
SQL FULL JOIN Syntax
SELECT column_name(s) FROM table_name1 FULL JOIN table_name2 ON table_name1.column_name=table_name2.column_name |
The FULL JOIN keyword returns all the rows from the left table (Persons), and all the rows from the right table (Orders). If there are rows in "Persons" that do not have matches in "Orders", or if there are rows in "Orders" that do not have matches in "Persons", those rows will be listed as well.
Friday, June 4, 2010
DDL, DML & DCL
What are the difference between DDL, DML and DCL commands?
DDL
Data Definition Language (DDL) statements are used to define the database structure or schema. Some examples:
- CREATE - to create objects in the database
- ALTER - alters the structure of the database
- DROP - delete objects from the database
- TRUNCATE - remove all records from a table; all space allocated for the records is also removed
- COMMENT - add comments to the data dictionary
- RENAME - rename an object
DML
Data Manipulation Language (DML) statements are used for managing data within schema objects. Some examples:
- SELECT - retrieve data from a database
- INSERT - insert data into a table
- UPDATE - updates existing data within a table
- DELETE - deletes records from a table; the space for the records remains
- MERGE - UPSERT operation (insert or update)
- CALL - call a PL/SQL or Java subprogram
- EXPLAIN PLAN - explain access path to data
- LOCK TABLE - control concurrency
DCL
Data Control Language (DCL) statements. Some examples:
- GRANT - gives users access privileges to the database
- REVOKE - withdraw access privileges given with the GRANT command
TCL
Transaction Control (TCL) statements are used to manage the changes made by DML statements. They allow statements to be grouped together into logical transactions.
- COMMIT - save work done
- SAVEPOINT - identify a point in a transaction to which you can later roll back
- ROLLBACK - restore the database to its state at the last COMMIT
- SET TRANSACTION - Change transaction options like isolation level and what rollback segment to use
ENTRY & EXIT CRITERIA FOR TESTING
As the name implies, entry criteria denote the conditions that must be satisfied before a process can begin. For example, the Test Plan, Test Strategy and test tools are some of the entry criteria for carrying out the testing process. Exit criteria, as the name implies, denote the conditions that must be met before a cycle completes. Test results, the Test Summary Report and test logs are some of the exit criteria of the testing process.
The Entrance Criteria, specified by the system test controller, should be fulfilled before System Test can commence. In the event that any criterion has not been achieved, the System Test may commence only if the Business Team and Test Controller are in full agreement that the risk is manageable.
* All developed code must be unit tested. Unit and Link Testing must be completed and signed off by development team.
* System Test plans must be signed off by Business Analyst and Test Controller.
* All human resources must be assigned and in place.
* All test hardware and environments must be in place, and free for System test use.
* The Acceptance Tests must be completed, with a pass rate of not less than 80%.
The Exit Criteria detailed below must be achieved before the Phase 1 software can be recommended for promotion to Operations Acceptance status. Furthermore, I recommend that there be a minimum 2 days effort Final Integration testing AFTER the final fix/change has been retested.
* All High Priority errors from System Test must be fixed and tested
* If any medium or low-priority errors are outstanding - the implementation risk must be signed off as acceptable by Business Analyst
* Project Integration Test must be signed off by Test Controller and Business Analyst.
* Business Acceptance Test must be signed off by Business Expert.
FOREIGN KEY
A foreign key is a field (or fields) that points to the primary key of another table. The purpose of the foreign key is to ensure referential integrity of the data. In other words, only values that are supposed to appear in the database are permitted.
For example, say we have two tables, a CUSTOMER table that includes all customer data, and an ORDERS table that includes all customer orders. The constraint here is that all orders must be associated with a customer that is already in the CUSTOMER table. In this case, we will place a foreign key on the ORDERS table and have it relate to the primary key of the CUSTOMER table. This way, we can ensure that all orders in the ORDERS table are related to a customer in the CUSTOMER table. In other words, the ORDERS table cannot contain information on a customer that is not in the CUSTOMER table.
The structure of these two tables will be as follows:
Table CUSTOMER
column name | characteristic |
SID | Primary Key |
Last_Name | |
First_Name | |
Table ORDERS
column name | characteristic |
Order_ID | Primary Key |
Order_Date | |
Customer_SID | Foreign Key |
Amount | |
In the above example, the Customer_SID column in the ORDERS table is a foreign key pointing to the SID column in the CUSTOMER table.
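To make the constraint concrete, here is a minimal sketch of the CUSTOMER/ORDERS relationship using SQLite through Python's sqlite3 module (an assumption for illustration; the article's examples target MySQL, Oracle, and SQL Server). Note that SQLite only enforces foreign keys after `PRAGMA foreign_keys = ON`; the table and column names follow the example above:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite ignores FK constraints by default

conn.execute(
    "CREATE TABLE CUSTOMER (SID INTEGER PRIMARY KEY, Last_Name TEXT, First_Name TEXT)")
conn.execute("""CREATE TABLE ORDERS
    (Order_ID INTEGER PRIMARY KEY,
     Order_Date TEXT,
     Customer_SID INTEGER REFERENCES CUSTOMER(SID),
     Amount REAL)""")

conn.execute("INSERT INTO CUSTOMER VALUES (1, 'Smith', 'Anna')")

# An order for an existing customer is accepted
conn.execute("INSERT INTO ORDERS VALUES (100, '2010-05-29', 1, 99.50)")

# An order for a customer that is not in CUSTOMER is rejected,
# which is exactly the referential-integrity guarantee described above
try:
    conn.execute("INSERT INTO ORDERS VALUES (101, '2010-05-29', 42, 10.00)")
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

Running this prints a "FOREIGN KEY constraint failed" style error for the second insert, leaving only the valid order in the ORDERS table.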
Below we show examples of how to specify the foreign key when creating the ORDERS table:
MySQL:
CREATE TABLE ORDERS
(Order_ID integer,
Order_Date date,
Customer_SID integer,
Amount double,
Primary Key (Order_ID),
Foreign Key (Customer_SID) references CUSTOMER(SID));
Oracle:
CREATE TABLE ORDERS
(Order_ID integer primary key,
Order_Date date,
Customer_SID integer references CUSTOMER(SID),
Amount double);
SQL Server:
CREATE TABLE ORDERS
(Order_ID integer primary key,
Order_Date datetime,
Customer_SID integer references CUSTOMER(SID),
Amount float);
Below are examples for specifying a foreign key by altering a table. This assumes that the ORDERS table has been created, and the foreign key has not yet been put in:
MySQL:
ALTER TABLE ORDERS
ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
Oracle:
ALTER TABLE ORDERS
ADD CONSTRAINT fk_orders1 FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
SQL Server:
ALTER TABLE ORDERS
ADD FOREIGN KEY (customer_sid) REFERENCES CUSTOMER(SID);
Quality Center
HP Quality Center (QC) (formerly HP TestDirector for Quality Center) is a web-based test management tool by Mercury Interactive (now HP). It is based on client server technology and has five main modules/tabs: Releases, Requirements, Test Plan, Test Lab and Defects for management of testing processes. There can be additional modules as well, depending on the various add-ins installed, e.g., BPT (Business Process Testing).
Modules
Releases
This module helps to manage releases and cycles (iterations). Users can plan and track application release progress here.
Requirements
This module is used for Requirement Management and for tracing requirements to the various test cases stored in the QC repository.
Test Plan
This module is used for creating and updating test cases. The test cases are organized in folders, which are displayed in a tree-like structure, and it can store both manual and automated test cases. Manual test cases can be written locally or imported from Microsoft Excel spreadsheets, with each test step having an Expected Result and an Actual Result section. QC supports automated scripts developed for automation tools such as QTP, LoadRunner, and WinRunner; these scripts can be saved directly from the tool into the Test Plan tab of QC, provided the appropriate QC add-in for that tool has been installed first.
Test Lab
This tab is for execution of the test cases stored in the Test Plan module, which can be imported locally to the Test Lab screen and run. When a Manual Test case is executed, it opens up a pop-up window listing all of the Test Steps, and the user can update the status of each step with Passed, Failed or Not Complete. When an automated test case is run, QC invokes the Automation Tool which in turn executes the script, stores the result in the QC repository, and displays it on the UI.
Defects
All defects are logged in this tab. These defects can be mapped to the corresponding test cases that failed and hence to the Requirements tab. Defects can be filtered on the status of the defect or by the user.
Reports
A dedicated add-on package, QCReporting, produces reports from QC in the following formats, where possible:
- Microsoft Excel
- Screen table
Source: Wikipedia
Saturday, May 29, 2010
Referential CONSTRAINT
Constraints on the database that require relations to satisfy certain properties. Relations that satisfy all such constraints are legal relations.
Defining referential constraints
Referential integrity is imposed by adding referential constraints to table and column definitions. Once referential constraints are defined to the database manager, changes to the data within the tables and columns are checked against the defined constraints, and completion of the requested action depends on the result of that checking. Referential constraints are established with the FOREIGN KEY clause and the REFERENCES clause in the CREATE TABLE or ALTER TABLE statement. There are effects of a referential constraint on a typed table, or on a parent table that is a typed table, that you should consider before creating a referential constraint.
The identification of foreign keys enforces constraints on the values within the rows of a table or between the rows of two tables. The database manager checks the constraints specified in a table definition and maintains the relationships accordingly. The goal is to maintain integrity whenever one database object references another.
For example, the EMPLOYEE and DEPARTMENT tables each have a department number column: for the EMPLOYEE table the column name is WORKDEPT, and for the DEPARTMENT table it is DEPTNO. The relationship between these two tables is defined by the following constraints:
- There is only one department number for each employee in the EMPLOYEE table, and that number exists in the DEPARTMENT table.
- Each row in the EMPLOYEE table is related to no more than one row in the DEPARTMENT table. There is a unique relationship between the tables.
- Each row in the EMPLOYEE table that has a non-null value for WORKDEPT is related to a row in the DEPTNO column of the DEPARTMENT table.
- The DEPARTMENT table is the parent table, and the EMPLOYEE table is the dependent table.
The SQL statement defining the parent table, DEPARTMENT, is:
CREATE TABLE DEPARTMENT
(DEPTNO CHAR(3) NOT NULL,
DEPTNAME VARCHAR(29) NOT NULL,
MGRNO CHAR(6),
ADMRDEPT CHAR(3) NOT NULL,
LOCATION CHAR(16),
PRIMARY KEY (DEPTNO))
IN RESOURCE
The SQL statement defining the dependent table, EMPLOYEE, is:
CREATE TABLE EMPLOYEE
(EMPNO CHAR(6) NOT NULL PRIMARY KEY,
FIRSTNME VARCHAR(12) NOT NULL,
LASTNAME VARCHAR(15) NOT NULL,
WORKDEPT CHAR(3),
PHONENO CHAR(4),
PHOTO BLOB(10M) NOT NULL,
FOREIGN KEY DEPT (WORKDEPT)
REFERENCES DEPARTMENT
ON DELETE NO ACTION)
IN RESOURCE
By specifying the DEPTNO column as the primary key of the DEPARTMENT table and WORKDEPT as the foreign key of the EMPLOYEE table, you are defining a referential constraint on the WORKDEPT values. This constraint enforces referential integrity between the values of the two tables. In this case, any employees that are added to the EMPLOYEE table must have a department number that can be found in the DEPARTMENT table.
The delete rule for the referential constraint in the employee table is NO ACTION, which means that a department cannot be deleted from the DEPARTMENT table if there are any employees in that department.
Although the previous examples use the CREATE TABLE statement to add a referential constraint, the ALTER TABLE statement can also be used.
Another example: The same table definitions are used as those in the previous example. Also, the DEPARTMENT table is created before the EMPLOYEE table. Each department has a manager, and that manager is listed in the EMPLOYEE table. MGRNO of the DEPARTMENT table is actually a foreign key of the EMPLOYEE table. Because of this referential cycle, this constraint poses a slight problem. You could add a foreign key later. You could also use the CREATE SCHEMA statement to create both the EMPLOYEE and DEPARTMENT tables at the same time.
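The NO ACTION delete rule above can be demonstrated with a small sketch. This uses SQLite through Python's sqlite3 module rather than DB2 (an assumption for illustration), so the DB2-specific IN RESOURCE tablespace clause is dropped and the tables are trimmed to the columns that matter for the constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this to enforce FKs

# Parent table: one row per department
conn.execute(
    "CREATE TABLE DEPARTMENT (DEPTNO TEXT PRIMARY KEY, DEPTNAME TEXT NOT NULL)")

# Dependent table: WORKDEPT references DEPARTMENT with a NO ACTION delete rule
conn.execute("""CREATE TABLE EMPLOYEE
    (EMPNO TEXT PRIMARY KEY,
     LASTNAME TEXT NOT NULL,
     WORKDEPT TEXT,
     FOREIGN KEY (WORKDEPT) REFERENCES DEPARTMENT (DEPTNO)
         ON DELETE NO ACTION)""")

conn.execute("INSERT INTO DEPARTMENT VALUES ('A01', 'Development')")
conn.execute("INSERT INTO EMPLOYEE VALUES ('000010', 'Haas', 'A01')")

# NO ACTION blocks deleting a department that still has employees
try:
    conn.execute("DELETE FROM DEPARTMENT WHERE DEPTNO = 'A01'")
except sqlite3.IntegrityError as exc:
    print("delete blocked:", exc)

# Once the dependent rows are gone, the same delete succeeds
conn.execute("DELETE FROM EMPLOYEE WHERE WORKDEPT = 'A01'")
conn.execute("DELETE FROM DEPARTMENT WHERE DEPTNO = 'A01'")
```

The first DELETE is rejected because an employee row still references department A01; after the employee rows are removed, the department can be deleted.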