Sunday, February 28, 2010

Traceability Matrix


A method used to validate the compliance of a process or product with the requirements for that process or product.  The requirements are each listed in a row of the matrix and the columns of the matrix are used to identify how and where each requirement has been addressed.

A traceability matrix is a report from the requirements database or repository. What information the report contains depends on your needs; those information needs determine the associated data you store with each requirement. Requirements management tools capture this associated information or provide the capability to add it.

In a software development process, a traceability matrix is a table that correlates any two baselined documents that share a many-to-many relationship, in order to determine the completeness of that relationship. It is often used to map the high-level and detailed requirements of the software product to the matching parts of the high-level design, detailed design, test plan, and test cases.

Common usage is to take the identifier for each of the items of one document and place them in the left column. The identifiers for the other document are placed across the top row. When an item in the left column is related to an item across the top, a mark is placed in the intersecting cell. The number of relationships is then added up for each row and each column. These totals indicate how well the two documents map to each other: a zero indicates that no relationship exists and it must be determined whether one should be made, while a large value suggests that the item is too complex and should be simplified.
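
As a rough illustration of this counting step, here is a minimal C++ sketch (the requirement IDs, test case IDs, and links are hypothetical and not taken from any particular tool) that totals the relationships per requirement and flags rows with a count of zero:

// Minimal sketch: total the relationships per requirement in a traceability matrix.
// The requirement IDs, test case IDs, and links below are hypothetical examples.
#include <iostream>
#include <string>
#include <vector>
using namespace std;

int main()
{
  vector<string> requirements = { "REQ-1", "REQ-2", "REQ-3" };
  vector<string> testCases    = { "TC-1", "TC-2", "TC-3", "TC-4" };

  // links[r][t] is true when requirement r is addressed by test case t.
  bool links[3][4] = {
    { true,  true,  false, false },   // REQ-1 addressed by TC-1 and TC-2
    { false, false, true,  false },   // REQ-2 addressed by TC-3
    { false, false, false, false }    // REQ-3 not addressed at all
  };

  for (size_t r = 0; r < requirements.size(); ++r) {
    int total = 0;
    for (size_t t = 0; t < testCases.size(); ++t)
      if (links[r][t]) ++total;

    cout << requirements[r] << ": " << total << " relationship(s)";
    if (total == 0)
      cout << "  <-- none; decide whether one must be made";
    cout << "\n";
  }
  return 0;
}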

To ease the creation of traceability matrices, it is advisable to add the relationships to the source documents, for both backward traceability and forward traceability. In other words, when an item is changed in one baselined document, it is easy to see what needs to be changed in the other.

Useful Traceability Matrices-

  • Functional specification to requirements document: It shows that each requirement (obtained from a preliminary requirements statement provided by the customer or produced in the Concept Definition stage) has been covered in an appropriate section of the functional specification.
  • Top level configuration item to functional specification: For example, a top level configuration item, Workstation, may be one of the configuration items that satisfies the function Input Order Information. On the matrix, each configuration item would be written down the left hand column and each function would be written across the top.
  • Low level configuration item to top level configuration item: For example, the top level configuration item, Workstation, may contain the low level configuration items Monitor, CPU, keyboard, and network interface card.
  • Design specification to functional specification verifies that each function has been covered in the design.
  • System test plan to functional specification ensures you have identified a test case or test scenario for each process and each requirement in the functional specification.



183 SIP MESSAGE- Session Progress Method

There are instances, most notably dealing with SIP to PSTN interworking, that necessitate that the SIP called User Agent (UA) be able to suppress local alerting by the SIP calling UA and to set up a preliminary media session from the called UA to the calling UA. This would allow the called UA to play back media prior to the full SIP session being set up. This media would be used to report on the status of the session setup request. It could also be used to play music while the session setup is attempted. This would be useful for find-me-like services that involve attempting multiple locations for a single setup request.

The only method in the current SIP specification that allows the called UA to play back media would be to set up a full SIP session. In PSTN interworking situations (and likely in end-to-end SIP sessions) this will cause a billing relationship to be established between networks for the session. This causes a problem when the reason for setting up the media session is to indicate a failure in the session setup.
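
As a rough sketch (not taken from the text above), this is how a 183 Session Progress response carrying SDP lets the called side play early media before the full session, and the associated billing relationship, is established:

Calling UA                                      Called UA (e.g. a PSTN gateway)
    | ----------- INVITE (with SDP) ---------------> |
    | <---------- 183 Session Progress (SDP) ------- |   preliminary media session
    | <=========== early media over RTP ============ |   status announcements, ring-back, music
    | <---------- 200 OK --------------------------- |   only if the call is actually answered
    | ----------- ACK ------------------------------> |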

SIMULATOR VS EMULATOR

1. Simulator:
Suppose you have written an assembly program in a file and the corresponding exe file is ready. The simulator is PC software that reads the instructions from the exe and 'mimics' the operation of the processor.
2. Emulator:
An emulator is PC software plus a processor. The processor can be plugged into the TARGET BOARD when you want to test the developed software in real time to check for run-time bugs, and unplugged when not in use. The processor has a parallel or JTAG interface to the PC for downloading the exe file for execution.

Hence, whereas the simulator is slow in execution, the emulator is able to give real-time verification of the developed code.
Generally you test your developed code on the simulator first and then check it on the emulator.

What is Boundary value analysis and Equivalence partitioning?



Boundary value analysis and equivalence partitioning both are test case design strategies in black box testing.
Equivalence Partitioning:
In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable test cases while still covering the maximum requirements.
In short, it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.
E.g.: If you are testing an input box that accepts numbers from 1 to 1000, there is no point in writing a thousand test cases for all 1000 valid input numbers plus other test cases for invalid data.
Using the equivalence partitioning method, the above test cases can be divided into three sets of input data called classes. Each test case is a representative of its respective class.
So in the above example we can divide our test cases into three equivalence classes of valid and invalid inputs.
Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.
2) An input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.
3) Input data with any value greater than 1000 to represent the third invalid input class.
So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.
We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of each equivalence class can be exercised.
Equivalence partitioning uses the fewest test cases to cover the maximum requirements.
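
A minimal sketch of how these three classes translate into test values, assuming a hypothetical acceptsNumber() function standing in for the real validation logic of the input box:

// Equivalence partitioning sketch for an input box accepting 1..1000.
// acceptsNumber() is a hypothetical stand-in for the real validation logic.
#include <iostream>
using namespace std;

bool acceptsNumber(int n)
{
  return n >= 1 && n <= 1000;   // valid range of the input box
}

int main()
{
  // One representative value per equivalence class.
  int validValue      = 500;    // class 1: valid inputs (1 to 1000)
  int belowLowerLimit = -5;     // class 2: invalid, below 1
  int aboveUpperLimit = 2000;   // class 3: invalid, above 1000

  cout << "valid class (500):   " << (acceptsNumber(validValue)      ? "accepted" : "rejected") << endl;
  cout << "below limit (-5):    " << (acceptsNumber(belowLowerLimit) ? "accepted" : "rejected") << endl;
  cout << "above limit (2000):  " << (acceptsNumber(aboveUpperLimit) ? "accepted" : "rejected") << endl;
  return 0;
}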
Boundary value analysis:
It is widely recognized that input values at the extreme ends of the input domain cause more errors in the system; more application errors occur at the boundaries of the input domain. The 'boundary value analysis' testing technique is used to identify errors at the boundaries rather than those that exist in the center of the input domain.
Boundary value analysis is a natural follow-on to equivalence partitioning, in which test cases are selected at the edges of the equivalence classes.
Test cases for an input box accepting numbers between 1 and 1000 using boundary value analysis:
1) Test cases with test data exactly on the input boundaries of the input domain, i.e. values 1 and 1000 in our case.
2) Test data with values just below the extreme edges of the input domain, i.e. values 0 and 999.
3) Test data with values just above the extreme edges of the input domain, i.e. values 2 and 1001.
Boundary value analysis is often considered part of stress and negative testing.
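
Reusing the same hypothetical acceptsNumber() validator, a sketch of the boundary value test data for the 1 to 1000 input box:

// Boundary value analysis sketch for an input box accepting 1..1000.
// acceptsNumber() is the same hypothetical validator as in the sketch above.
#include <iostream>
using namespace std;

bool acceptsNumber(int n)
{
  return n >= 1 && n <= 1000;
}

int main()
{
  // Values on, just below, and just above the boundaries 1 and 1000.
  int boundaryData[] = { 0, 1, 2, 999, 1000, 1001 };

  for (int value : boundaryData)
    cout << value << " -> " << (acceptsNumber(value) ? "accepted" : "rejected") << endl;
  return 0;
}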

Thursday, February 18, 2010

WHAT IS BREW?


BREW (Binary Runtime Environment for Wireless) is Qualcomm's application development platform for wireless devices equipped with code division multiple access (CDMA) technology. BREW makes it possible for developers to create portable applications that will work on any handset equipped with a CDMA chipset. Because BREW runs between the application and the chip's operating system software, the application can use the device's functionality without the developer needing to code to the system interface or even having to understand wireless applications. Users can download applications - such as text chat, enhanced e-mail, location positioning, games (both online and offline), and Internet radio - from carrier networks to any BREW-enabled phone.
BREW competes for wireless software market share with J2ME (Java 2 Micro Edition), a similar platform from Sun Microsystems. The initial version of BREW is solely for CDMA networks; later versions could be enabled for time division multiple access (TDMA) and Global System for Mobile Communications (GSM) networks.

Wednesday, February 17, 2010

Wednesday, February 10, 2010

SIP Communication Example



In this section a call will be analyzed in detail. A SIP call comprises several SIP transactions. A SIP transaction consists of a request and its associated responses, and the messages are grouped into the same transaction by means of the CSeq header field.
[Call flow diagram: User A - Proxy - User B]
- The first step is user registration. Users must register themselves so that they can be found by other users. In this case, the terminals send a REGISTER request, where the "from" and "to" fields correspond to the registered user. The Proxy server, which acts as a registrar, checks whether the user can be authenticated and sends an OK message if there is no problem.

- The following transaction corresponds to session establishment. It consists of an INVITE request from user A to the proxy. The proxy immediately sends a 100 Trying to stop retransmissions and forwards the request to user B. User B sends a 180 Ringing when the telephone begins to ring, and this is also forwarded by the proxy to user A. Finally, the 200 OK message corresponds to the call being accepted (user B answers the call).

- At this moment the call is established, and media starts flowing over the RTP transport protocol with the parameters (ports, addresses, codecs, etc.) negotiated via SDP.

- The last transaction corresponds to ending the session. This is carried out with a single BYE request sent to the Proxy and then forwarded to user B. User B replies with a 200 OK message to confirm that the final message has been received correctly, as shown in the sketch below.
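
Put together, the transactions described above look roughly like this (a simplified sketch; the ACK that completes the INVITE transaction is shown even though it is not discussed above, and the RTP media normally flows directly between the two user agents):

User A                          Proxy                          User B
  | ---- REGISTER ------------> |                                |
  | <--- 200 OK --------------- |                                |
  |                             | <---------------- REGISTER --- |
  |                             | ----------------- 200 OK ----> |
  | ---- INVITE --------------> |                                |
  | <--- 100 Trying ----------- | ----------------- INVITE ----> |
  | <--- 180 Ringing ---------- | <---------------- 180 Ringing  |
  | <--- 200 OK --------------- | <---------------- 200 OK ----- |
  | ---- ACK -----------------> | ----------------- ACK -------> |
  | <=================== RTP media session ====================> |
  | ---- BYE -----------------> | ----------------- BYE -------> |
  | <--- 200 OK --------------- | <---------------- 200 OK ----- |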

Very useful links-
http://www.docs.hp.com/en/5992-4080/ch01s05.html

http://toncar.cz/Tutorials/VoIP/VoIP_Protocols_SIP_Messages.html

TEST STRATEGY VS TEST PLAN

Test Strategy:
It is a company-level document developed by QA-category people such as the QA lead and the PM. This document defines the "Testing Approach" used to achieve the testing objective. The test strategy, along with the test policy, is derived from the frozen BRS.

Components in the Test Strategy are as follows:
1. Scope and objective
2. Business issues
3. Roles and responsibilities
4. Communication and status reporting
5. Test deliverables
6. Test approach
7. Test automation and tools
8. Testing measurements and metrics
9. Risks and mitigation
10. Defect reporting and tracking
11. Change and configuration management
12. Training plan

Test Plan:
The test plan is a frozen document developed from the SRS, FS, and UC. After the testing team has been formed and risk analysis completed, the Test Lead prepares the test plan document in terms of what to test, how to test, who will test, and when to test.
There is one Master Test Plan, which consists of the reviewed Project Test Plan and the Phase Test Plans, so the discussion here generally refers to the Project Test Plan.
Components are as follows:

1. Test Plan id
2. Introduction
3. Test items
4. Features to be tested
5. Features not to be tested
6. Approach
7. Testing tasks
8. Suspension criteria
9. Feature pass or fail criteria
10. Test environment (Entry criteria, Exit criteria)
11. Test deliverables
12. Staff and training needs
13. Responsibilities
14. Schedule
15. Risk and mitigation
16. Approvals

This is one standard approach to preparing a test plan document, but the details can vary from company to company.



A Test Plan describes the approach, the features to be tested, the testers assigned, and whatever else you plan for your project. A Test Plan is usually prepared by a Manager or Team Lead. That is generally true, but not exclusively so; it depends on what the test plan is intended for. Some companies define a test plan as what most would consider a test case, meaning that it covers one part of the functionality validation.

A strategy is how you are going to address testing for the project. Some companies have a strategy or approach section in the test plan; others have a separate document.

In some companies a general strategy document is created. This outlines what is expected or advised (depending on the company's policy). The test plan documents in that case are based on the organizational strategy and as such are more lightweight.

Monday, February 1, 2010

puts() Vs fputs()


puts(): Write string to stdout
Writes the C string pointed by str to stdout and appends a newline character ('\n').
The function begins copying from the address specified (str) until it reaches the terminating null character ('\0'). This final null-character is not copied to stdout.

Using fputs(str,stdout) instead, performs the same operation as puts(str) but without appending the newline character at the end.

/* puts example : hello world! */
#include <stdio.h>

int main ()
{
  char string [] = "Hello world!";
  puts (string);
  return 0;
}

Return Value

On success, a non-negative value is returned.
On error, the function returns EOF.


fputs(): Write string to stream
Writes the string pointed by str to the stream.
The function begins copying from the address specified (str) until it reaches the terminating null character ('\0'). This final null-character is not copied to the stream.
/* fputs example */
#include <stdio.h>

int main ()
{
   FILE * pFile;
   char sentence [256];

   printf ("Enter sentence to append: ");
   fgets (sentence,255,stdin);
   pFile = fopen ("mylog.txt","a");
   if (pFile == NULL) {          /* check that the file actually opened */
      perror ("Error opening file");
      return 1;
   }
   fputs (sentence,pFile);
   fclose (pFile);
   return 0;
}

Return Value

On success, a non-negative value is returned. On error, the function returns EOF.


gets() Vs fgets()


gets(): Get string from stdin
Reads characters from stdin and stores them as a string into str until a newline character ('\n') or the End-of-File is reached.
The ending newline character ('\n') is not included in the string.
A null character ('\0') is automatically appended after the last character copied to str to signal the end of the C string.
Notice that gets does not behave exactly as fgets does with stdin as argument: First, the ending newline character is not included with gets while with fgets it is. And second, gets does not let you specify a limit on how many characters are to be read, so you must be careful with the size of the array pointed by str to avoid buffer overflows.

Return Value

On success, the function returns the same str parameter.
If the End-of-File is encountered and no characters have been read, the contents of str remain unchanged and a null pointer is returned.
If an error occurs, a null pointer is returned.
Use either ferror or feof to check whether an error happened or the End-of-File was reached.

/* gets example */
#include <stdio.h>

int main()
{
  char string [256];
  printf ("Insert your full address: ");
  gets (string);
  printf ("Your address is: %s\n",string);
  return 0;
}

fgets():
Reads characters from the stream and stores them as a C string into str until (num-1) characters have been read or either a newline or the End-of-File is reached, whichever comes first. A newline character makes fgets stop reading, but it is considered a valid character and therefore it is included in the string copied to str. A null character is automatically appended in str after the characters read to signal the end of the C string.

Return Value

On success, a non-negative value is returned. On error, the function returns EOF.
/* fgets example */
#include <stdio.h>

int main()
{
   FILE * pFile;
   char mystring [100];

   pFile = fopen ("myfile.txt" , "r");
   if (pFile == NULL) perror ("Error opening file");
   else {
     fgets (mystring , 100 , pFile);
     puts (mystring);
     fclose (pFile);
   }
   return 0;
}

EXCEPTION HANDLING C++

Exceptions provide a way to react to exceptional circumstances (like runtime errors) in our program by transferring control to special functions called handlers.

To catch exceptions we must place a portion of code under exception inspection. This is done by enclosing that portion of code in a try block. When an exceptional circumstance arises within that block, an exception is thrown that transfers the control to the exception handler. If no exception is thrown, the code continues normally and all handlers are ignored.





NOTE: While handling a derived class exception, it is required to catch the derived class object before the base class; otherwise the code under the derived class catch block will never be executed.


Example:


#include <iostream>
using namespace std;


class B {
};


class D : public B {
};


int main()
{
  D derived;
  try {
    throw derived;
  }
  catch (B b) {
    cout << "caught a base exception";
  }
  catch (D d) {
    cout << "caught a derived exception";
  }
  return 0;
}


Here catch(D d) will never be executed, as the exception will be caught by catch(B b).


An exception is thrown using the throw keyword from inside the try block. Exception handlers are declared with the keyword catch, which must be placed immediately after the try block:



Example



// exceptions
#include <iostream>
using namespace std;

int main () {
  try
  {
    throw 20;
  }
  catch (int e)
  {
    cout << "An exception occurred. Exception Nr. " << e << endl;
  }
  return 0;
}


For example, if we use the operator new and the memory cannot be allocated, an exception of type bad_alloc is thrown:



try
{
  int * myarray= new int[1000];
}
catch (bad_alloc&)
{
  cout << "Error allocating memory." << endl;
}


Because bad_alloc is derived from the standard base class exception, we can handle that same exception by catching references to the exception class:



// bad_alloc standard exception
#include <iostream>
#include <exception>
using namespace std;

int main () {
  try
  {
    int* myarray= new int[1000];
  }
  catch (exception& e)
  {
    cout << "Standard exception: " << e.what() << endl;
  }
  return 0;
}



In real-world scenarios we usually define our own exception class, which describes what the exception is.

Example:

#include <iostream>
#include <cstring>
using namespace std;

class MyException {
public:
  char str_what[80];
  int what;

  MyException() {
    *str_what = 0;
    what = 0;
  }
  MyException(const char *s, int i) {
    strcpy(str_what, s);
    what = i;
  }
};

int main()
{
  int i;
  try {
    cout << "Enter a positive number:  ";
    cin >> i;
    if (i < 0)
      throw MyException("Not Positive", i);
  }
  catch (MyException e) {
    cout << e.str_what << " : ";
    cout << e.what << "\n";
  }
  return 0;
}
Sample run:
Enter a positive number:  -4
Not Positive : -4