Sunday, January 31, 2010

PRIORITY VS SEVERITY

“Priority” is associated with scheduling; “severity” is associated with standards and criticality.
“Priority” means something is afforded or deserves prior attention: a precedence
established by order of importance (or urgency). “Severity” is the state or quality of
being severe; severe implies adherence to rigorous standards or high principles and
often suggests harshness, e.g. a severe code of behavior. In practice, severity describes how
badly a defect affects the system, while priority describes how soon it should be fixed: a typo
in the company name on the home page may be low severity but high priority, while a crash in a
rarely used report may be high severity but low priority.

Exploratory Testing & Ad Hoc Testing

Exploratory testing is an approach to software testing that is concisely described as simultaneous learning, test design and test execution. Exploratory testing seeks to find out how the software actually works, and to ask questions about how it will handle difficult and easy cases. The quality of the testing is dependent on the tester's skill of inventing test cases and finding defects. The more the tester knows about the product and different test methods, the better the testing will be.



Ad hoc testing is a commonly used term for software testing performed without planning and documentation.

The tests are intended to be run only once, unless a defect is discovered. Ad hoc testing is a part of exploratory testing, being the least formal of test methods. In this view, ad hoc testing has been criticized because it isn't structured, but this can also be a strength: important things can be found quickly. It is performed with improvisation: the tester seeks to find bugs by any means that seem appropriate. It contrasts with regression testing, which looks for a specific issue with detailed reproduction steps and a clear expected result. Ad hoc testing is most often used as a complement to other types of testing.

REVIEW PROCESS

The different types of reviews vary from very informal (e.g. no written instructions for reviewers) to
very formal (i.e. well structured and regulated). The formality of a review process is related to
factors such as the maturity of the development process, any legal or regulatory requirements or the
need for an audit trail. The way a review is carried out depends on the agreed objective of the review (e.g. find defects, gain understanding, or discussion and decision by consensus).

Phases of a formal review
A typical formal review has the following main phases:
1. Planning: selecting the personnel, allocating roles; defining the entry and exit criteria for more
formal review types (e.g. inspection); and selecting which parts of documents to look at.


2. Kick-off: distributing documents; explaining the objectives, process and documents to the
participants; and checking entry criteria (for more formal review types).

3. Individual preparation: work done by each of the participants on their own before the review
meeting, noting potential defects, questions and comments.

4. Review meeting: discussion or logging, with documented results or minutes (for more formal
review types). The meeting participants may simply note defects, make recommendations for
handling the defects, or make decisions about the defects.

5. Rework: fixing defects found, typically done by the author.

6. Follow-up: checking that defects have been addressed, gathering metrics and checking on exit
criteria (for more formal review types).

Roles and responsibilities:

A typical formal review will include the roles below:
  • Manager: decides on the execution of reviews, allocates time in project schedules and
    determines if the review objectives have been met.
  • Moderator: the person who leads the review of the document or set of documents, including
    planning the review, running the meeting, and follow-up after the meeting. If necessary, the
    moderator may mediate between the various points of view and is often the person upon whom the success of the review rests.
  • Author: the writer or person with chief responsibility for the document(s) to be reviewed.
  • Reviewers: individuals with a specific technical or business background (also called checkers or inspectors) who, after the necessary preparation, identify and describe findings (e.g. defects) in the product under review. Reviewers should be chosen to represent different perspectives and roles in the review process, and should take part in any review meetings.
  • Scribe (or recorder): documents all the issues, problems and open points that were identified
    during the meeting.
Looking at documents from different perspectives and using checklists can make reviews more
effective and efficient, for example, a checklist based on perspectives such as user, maintainer,
tester or operations, or a checklist of typical requirements problems.

Types of review:
A single document may be the subject of more than one review. If more than one type of review is
used, the order may vary. For example, an informal review may be carried out before a technical
review, or an inspection may be carried out on a requirements specification before a walkthrough
with customers. The main characteristics, options and purposes of common review types are:

Informal review
Key characteristics:
  • no formal process;
  • there may be pair programming or a technical lead reviewing designs and code;
  • optionally may be documented;
  • may vary in usefulness depending on the reviewer;
  • main purpose: inexpensive way to get some benefit.
Walkthrough
Key characteristics:
  • meeting led by author;
  • scenarios, dry runs, peer group;
  • open-ended sessions;
  • optionally a pre-meeting preparation of reviewers, review report, list of findings and scribe (who is not the author);
  • may vary in practice from quite informal to very formal;
  • main purposes: learning, gaining understanding, defect finding.
Technical review

Key characteristics:
  • documented, defined defect-detection process that includes peers and technical experts;
  • may be performed as a peer review without management participation;
  • ideally led by trained moderator (not the author);
  • pre-meeting preparation;
  • optionally the use of checklists, a review report and a list of findings;
  • may vary in practice from quite informal to very formal;
  • main purposes: discuss, make decisions, evaluate alternatives, find defects, solve technical
    problems and check conformance to specifications and standards.
Inspection
Key characteristics:
  • led by trained moderator (not the author);
  • usually peer examination;
  • defined roles;
  • includes metrics;
  • formal process based on rules and checklists with entry and exit criteria;
  • pre-meeting preparation;
  • inspection report, list of findings;
  • formal follow-up process;
  • optionally, process improvement and reader;
  • main purpose: find defects.
Walkthroughs, technical reviews and inspections can be performed within a peer group –
colleagues at the same organizational level. This type of review is called a “peer review”.


Success factors for reviews:
Success factors for reviews include:
  • Each review has a clear predefined objective.
  • The right people for the review objectives are involved.
  • Defects found are welcomed, and expressed objectively.
  • People issues and psychological aspects are dealt with (e.g. making it a positive experience for the author).
  • Review techniques are applied that are suitable to the type and level of software work products and reviewers.
  • Checklists or roles are used if appropriate to increase effectiveness of defect identification.
  • Training is given in review techniques, especially the more formal techniques, such as
    inspection.
  • Management supports a good review process (e.g. by incorporating adequate time for review activities in project schedules).
  • There is an emphasis on learning and process improvement.

Software VERIFICATION VS VALIDATION

Verification takes place before validation, and not vice versa. Verification evaluates documents, plans, code, requirements, and specifications. Validation, on the other hand, evaluates the product itself.



Verification:  was the product built right?

Validation: was the right product built?


In simple language, we can say that verification covers the pre-testing reviews, walkthroughs and evaluations of the documents, plans and code, whereas validation is the actual testing, finding the defects in the software.


Bug Life Cycle


In the SDLC, a bug has a life cycle of its own. The bug should go through this life cycle to be closed; a defined life cycle ensures that the process is standardized. The bug attains different states during its life cycle.


The different states of a bug can be summarized as follows:

1. New
2. Open
3. Assign
4. Test
5. Verified
6. Deferred
7. Reopened
8. Duplicate
9. Rejected and
10. Closed

Description of Various Stages:

1. New: When the bug is posted for the first time, its state will be “NEW”. This means that the bug is not yet approved.

2. Open: After a tester has posted a bug, the tester's lead approves that the bug is genuine and changes its state to “OPEN”.

3. Assign: Once the lead changes the state to “OPEN”, he assigns the bug to the corresponding developer or developer team. The state of the bug is now changed to “ASSIGN”.

4. Test: Once the developer fixes the bug, he has to assign the bug to the testing team for the next round of testing. Before he releases the software with the bug fixed, he changes the state of the bug to “TEST”. It specifies that the bug has been fixed and released to the testing team.

5. Deferred: A bug changed to the deferred state is expected to be fixed in a future release. There are many reasons for moving a bug to this state: the priority of the bug may be low, there may be a lack of time before the release, or the bug may not have a major effect on the software.

6. Rejected: If the developer feels that the bug is not genuine, he rejects the bug. Then the state of the bug is changed to “REJECTED”.

7. Duplicate: If the bug is reported twice, or two bugs describe the same issue, then one bug's status is changed to “DUPLICATE”.

8. Verified: Once the bug is fixed and the status is changed to “TEST”, the tester retests it. If the bug is no longer present in the software, he approves that the bug is fixed and changes the status to “VERIFIED”.

9. Reopened: If the bug still exists even after the bug is fixed by the developer, the tester changes the status to “REOPENED”. The bug traverses the life cycle once again.

10. Closed: Once the bug is fixed, it is tested by the tester. If the tester feels that the bug no longer exists in the software, he changes the status of the bug to “CLOSED”. This state means that the bug is fixed, tested and approved.

While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal. Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects.

Performance Vs Load Vs Stress Testing




Here's a good interview question for a tester: how do you define performance/load/stress testing? Many times people use these terms interchangeably, but they have in fact quite different meanings. 

Performance testing

The goal of performance testing is not to find bugs, but to eliminate bottlenecks and establish a baseline for future regression testing. To conduct performance testing is to engage in a carefully controlled process of measurement and analysis. Ideally, the software under test is already stable enough so that this process can proceed smoothly.

A clearly defined set of expectations is essential for meaningful performance testing. If you don't know where you want to go in terms of the performance of the system, then it matters little which direction you take (remember Alice and the Cheshire Cat?). For example, for a Web application, you need to know at least two things:
  • expected load in terms of concurrent users or HTTP connections
  • acceptable response time
Once you know where you want to be, you can start on your way there by constantly increasing the load on the system while looking for bottlenecks. To take again the example of a Web application, these bottlenecks can exist at multiple levels, and to pinpoint them you can use a variety of tools:
  • at the application level, developers can use profilers to spot inefficiencies in their code (for example poor search algorithms)
  • at the database level, developers and DBAs can use database-specific profilers and query optimizers
  • at the operating system level, system engineers can use utilities such as top, vmstat, iostat (on Unix-type systems) and PerfMon (on Windows) to monitor hardware resources such as CPU, memory, swap, disk I/O; specialized kernel monitoring software can also be used
  • at the network level, network engineers can use packet sniffers such as tcpdump, network protocol analyzers such as ethereal, and various utilities such as netstat, MRTG, ntop, mii-tool
From a testing point of view, the activities described above all take a white-box approach, where the system is inspected and monitored "from the inside out" and from a variety of angles. Measurements are taken and analyzed, and as a result, tuning is done.

However, testers also take a black-box approach in running the load tests against the system under test. For a Web application, testers will use tools that simulate concurrent users/HTTP connections and measure response times. Some lightweight open source tools I've used in the past for this purpose are ab, siege and httperf. A more heavyweight tool I haven't used yet is OpenSTA. I also haven't used The Grinder yet, but it is high on my TODO list.

When the results of the load test indicate that performance of the system does not meet its expected goals, it is time for tuning, starting with the application and the database. You want to make sure your code runs as efficiently as possible and your database is optimized on a given OS/hardware configuration. TDD practitioners will find very useful in this context a framework such as Mike Clark's jUnitPerf, which enhances existing unit test code with load test and timed test functionality. Once a particular function or method has been profiled and tuned, developers can then wrap its unit tests in jUnitPerf and ensure that it meets performance requirements of load and timing. Mike Clark calls this "continuous performance testing". I should also mention that I've done an initial port of jUnitPerf to Python -- I called it pyUnitPerf.

If, after tuning the application and the database, the system still doesn't meet its expected goals in terms of performance, a wide array of tuning procedures is available at all the levels discussed before. Here are some examples of things you can do to enhance the performance of a Web application outside of the application code per se:
  • Use Web cache mechanisms, such as the one provided by Squid
  • Publish highly-requested Web pages statically, so that they don't hit the database
  • Scale the Web server farm horizontally via load balancing
  • Scale the database servers horizontally and split them into read/write servers and read-only servers, then load balance the read-only servers
  • Scale the Web and database servers vertically, by adding more hardware resources (CPU, RAM, disks)
  • Increase the available network bandwidth
Performance tuning can sometimes be more art than science, due to the sheer complexity of the systems involved in a modern Web application. Care must be taken to modify one variable at a time and redo the measurements, otherwise multiple changes can have subtle interactions that are hard to qualify and repeat.

In a standard test environment such as a test lab, it will not always be possible to replicate the production server configuration. In such cases, a staging environment is used which is a subset of the production environment. The expected performance of the system needs to be scaled down accordingly.

The cycle "run load test->measure 
performance->tune system" is repeated until the system under test achieves the expected levels ofperformance. At this point, testers have a baseline for how the system behaves under normal conditions. This baseline can then be used in regression tests to gauge how well a new version of the software performs.

Another common goal of performance testing is to establish benchmark numbers for the system under test. There are many industry-standard benchmarks such as the ones published by TPC, and many hardware/software vendors will fine-tune their systems in such ways as to obtain a high ranking in the TPC top tens. It is common knowledge that one needs to be wary of any performance claims that do not include a detailed specification of all the hardware and software configurations that were used in that particular test.
Load testing
We have already seen load testing as part of the process of performance testing and tuning. In that context, it meant constantly increasing the load on the system via automated tools. For a Web application, the load is defined in terms of concurrent users or HTTP connections.

In the testing literature, the term "load testing" is usually defined as the process of exercising the system under test by feeding it the largest tasks it can operate with. Load testing is sometimes called volume testing, or longevity/endurance testing.

Examples of volume testing:
  • testing a word processor by editing a very large document
  • testing a printer by sending it a very large job
  • testing a mail server with thousands of user mailboxes
  • a specific case of volume testing is zero-volume testing, where the system is fed empty tasks
Examples of longevity/endurance testing:
  • testing a client-server application by running the client in a loop against the server over an extended period of time
Goals of load testing:
  • expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc.
  • ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.
Although performance testing and load testing can seem similar, their goals are different. On one hand, performance testing uses load testing techniques and tools for measurement and benchmarking purposes and uses various load levels. On the other hand, load testing operates at a predefined load level, usually the highest load that the system can accept while still functioning properly. Note that load testing does not aim to break the system by overwhelming it, but instead tries to keep the system constantly humming like a well-oiled machine.

In the context of load testing, I want to emphasize the extreme importance of having large datasets available for testing. In my experience, many important bugs simply do not surface unless you deal with very large entities such as thousands of users in repositories such as LDAP/NIS/Active Directory, thousands of mail server mailboxes, multi-gigabyte tables in databases, deep file/directory hierarchies on file systems, etc. Testers obviously need automated tools to generate these large data sets, but fortunately any good scripting language worth its salt will do the job.
Stress testing

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this madness is to make sure that the system fails and recovers gracefully -- this quality is known as recoverability.

Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. To take again the example of a Web application, here are some ways in which stress can be applied to the system:
  • double the baseline number for concurrent users/HTTP connections
  • randomly shut down and restart ports on the network switches/routers that connect the servers (via SNMP commands for example)
  • take the database offline, then restart it
  • rebuild a RAID array while the system is running
  • run processes that consume resources (CPU, memory, disk, network) on the Web and database servers
I'm sure devious testers can enhance this list with their favorite ways of breaking systems. However, stress testing does not break the system purely for the pleasure of breaking it, but instead it allows testers to observe how the system reacts to failure. Does it save its state or does it crash suddenly? Does it just hang and freeze or does it fail gracefully? On restart, is it able to recover from the last good state? Does it print out meaningful error messages to the user, or does it merely display incomprehensible hex codes? Is the security of the system compromised because of unexpected failures? And the list goes on.

STUB VS DRIVER

Both these terms, stub and driver, are mainly used in software integration testing.


-> Stub is a piece of code emulating a called function; a driver is a piece of code emulating a calling function.


-> Stubs are created in top-down integration testing. 
    Drivers are created in bottom-up integration testing.

-> Stub: A piece of code that simulates the activity of a missing component. 
    Driver: A piece of code that passes test cases to another piece of code.



Example - For unit testing of a ‘Sales Order Printing’ program, a ‘Driver’ program will contain the code that creates Sales Order records using hard-coded data and then calls the ‘Sales Order Printing’ program. Suppose this printing program uses another unit which calculates sales discounts through some complex calculations. The call to this unit would then be replaced by a ‘Stub’, which simply returns fixed discount data.








Suppose I have a workflow in which functionality flows from one system to another and then to a third. If the second system in the flow is down, we can use a simulator that calls the third system in its place, which acts like a driver. Similarly, a canned version of the XML response we would otherwise have received from the second system acts as a stub.

Saturday, January 30, 2010

THE VIRTUAL TABLE- COMPLETE CONCEPT

To implement virtual functions, C++ uses a special form of late binding known as the virtual table. The virtual table is a lookup table of functions used to resolve function calls in a dynamic/late binding manner. The virtual table sometimes goes by other names, such as “vtable”, “virtual function table”, “virtual method table”, or “dispatch table”.

Because knowing how the virtual table works is not necessary to use virtual functions, this section can be considered optional reading.

The virtual table is actually quite simple, though it’s a little complex to describe in words. First, every class that uses virtual functions (or is derived from a class that uses virtual functions) is given its own virtual table. This table is simply a static array that the compiler sets up at compile time. A virtual table contains one entry for each virtual function that can be called by objects of the class. Each entry in this table is simply a function pointer that points to the most-derived function accessible by that class.

Second, the compiler also adds a hidden pointer to the base class, which we will call *__vptr. *__vptr is set (automatically) when a class instance is created so that it points to the virtual table for that class. Unlike the *this pointer, which is actually a function parameter used by the compiler to resolve self-references, *__vptr is a real pointer. Consequently, it makes each class object allocated bigger by the size of one pointer. It also means that *__vptr is inherited by derived classes, which is important.

By now, you’re probably confused as to how these things all fit together, so let’s take a look at a simple example:


class Base
{
public:
virtual void function1() {};
virtual void function2() {};
};



class D1: public Base
{
public:
virtual void function1() {};
};


class D2: public Base
{
public:
virtual void function2() {};
};
Because there are 3 classes here, the compiler will set up 3 virtual tables: one for Base, one for D1, and one for D2.

The compiler also adds a hidden pointer to the most base class that uses virtual functions. Although the compiler does this automatically, we’ll put it in the next example just to show where it’s added:


class Base
{
public:
FunctionPointer *__vptr;
virtual void function1() {};
virtual void function2() {};
};


class D1: public Base
{
public:
virtual void function1() {};
};

class D2: public Base
{
public:
virtual void function2() {};
};
When a class object is created, *__vptr is set to point to the virtual table for that class. For example, when an object of type Base is created, *__vptr is set to point to the virtual table for Base. When objects of type D1 or D2 are constructed, *__vptr is set to point to the virtual table for D1 or D2 respectively.

Now, let’s talk about how these virtual tables are filled out. Because there are only two virtual functions here, each virtual table will have two entries (one for function1(), and one for function2()). Remember that when these virtual tables are filled out, each entry is filled out with the most-derived function an object of that class type can call.

Base’s virtual table is simple. An object of type Base can only access the members of Base. Base has no access to D1 or D2 functions. Consequently, the entry for function1 points to Base::function1(), and the entry for function2 points to Base::function2().

D1’s virtual table is slightly more complex. An object of type D1 can access members of both D1 and Base. However, D1 has overridden function1(), making D1::function1() more derived than Base::function1(). Consequently, the entry for function1 points to D1::function1(). D1 hasn’t overridden function2(), so the entry for function2 will point to Base::function2().

D2’s virtual table is similar to D1, except the entry for function1 points to Base::function1(), and the entry for function2 points to D2::function2().

Here’s a picture of this graphically:



Although this diagram is kind of crazy looking, it’s really quite simple: the *__vptr in each class points to the virtual table for that class. The entries in the virtual table point to the most-derived version of the function objects of that class are allowed to call.

So consider what happens when we create an object of type D1:

int main()
{
D1 cClass;
}
Because cClass is a D1 object, cClass has its *__vptr set to the D1 virtual table.

Now, let’s set a base pointer to D1:

int main()
{
D1 cClass;
Base *pClass = &cClass;
}
Note that because pClass is a base pointer, it only points to the Base portion of cClass. However, also note that *__vptr is in the Base portion of the class, so pClass has access to this pointer. Finally, note that pClass->__vptr points to the D1 virtual table! Consequently, even though pClass is of type Base, it still has access to D1’s virtual table.

So what happens when we try to call pClass->function1()?

int main()
{
D1 cClass;
Base *pClass = &cClass;
pClass->function1();
}
First, the program recognizes that function1() is a virtual function. Second, it uses pClass->__vptr to get to D1’s virtual table. Third, it looks up which version of function1() to call in D1’s virtual table. This has been set to D1::function1(). Therefore, pClass->function1() resolves to D1::function1()!

Now, you might be saying, “But what if pClass really pointed to a Base object instead of a D1 object. Would it still call D1::function1()?”. The answer is no.


int main()
{
Base cClass;
Base *pClass = &cClass;
pClass->function1();
}
In this case, when cClass is created, __vptr points to Base’s virtual table, not D1’s virtual table. Consequently, pClass->__vptr will also be pointing to Base’s virtual table. Base’s virtual table entry for function1() points to Base::function1(). Thus, pClass->function1() resolves to Base::function1(), which is the most-derived version of function1() that a Base object should be able to call.

By using these tables, the compiler and program are able to ensure function calls resolve to the appropriate virtual function, even if you’re only using a pointer or reference to a base class!

Calling a virtual function is slower than calling a non-virtual function for a couple of reasons: First, we have to use the *__vptr to get to the appropriate virtual table. Second, we have to index the virtual table to find the correct function to call. Only then can we call the function. As a result, we have to do 3 operations to find the function to call, as opposed to 2 operations for a normal indirect function call, or one operation for a direct function call. However, with modern computers, this added time is usually fairly insignificant.

Source: http://www.learncpp.com/

Friday, January 29, 2010

Overloaded Assignment

When to define assignment (and a copy constructor and destructor)
If your object has a pointer to dynamically allocated memory, e.g. allocated in the constructor, you will want to make a deep copy of the object. Deep copies require overloading assignment, as well as defining a copy constructor and a destructor.

Example
//--- file Person.h
. . .
class Person {
private:
char* _name;
int _id;
public:
Person& operator=(const Person& p);
. . .
};
//--- file Person.cpp
. . .
//=================================================== operator=
Person& Person::operator=(const Person& p) {
if (this != &p) { // make sure not same object
delete [] _name; // Delete old name's memory.
_name = new char[strlen(p._name)+1]; // Get new space
strcpy(_name, p._name); // Copy new name
_id = p._id; // Copy id
}
return *this; // Return ref for multiple assignment
}//end operator=

Shallow vs Deep Copies

A shallow copy of an object copies all of the member field values. This works well if the fields are values, but may not be what you want for fields that point to dynamically allocated memory. The pointer will be copied, but the memory it points to will not be copied -- the field in both the original object and the copy will then point to the same dynamically allocated memory, which is not usually what you want. The default copy constructor and assignment operator make shallow copies.

A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by the fields. To make a deep copy, you must write a copy constructor and overload the assignment operator, otherwise the copy will point to the original, with disastrous consequences.

Copy Constructor- what, why and types.

The compiler not only creates a default constructor for you if you do not specify your own; it provides three special member functions in total that are implicitly declared if you do not declare your own. These are the copy constructor, the copy assignment operator, and the destructor.

When copies of objects are made
A copy constructor is called whenever a new variable is created from an object. This happens in the following cases (but not in assignment).

A variable is declared which is initialized from another object, eg,
Person q("Mickey"); // constructor is used to build q.
Person r(p); // copy constructor is used to build r.
Person p = q; // copy constructor is used to initialize in declaration.
p = q; // Assignment operator, no constructor or copy constructor.
A value parameter is initialized from its corresponding argument.
f(p); // copy constructor initializes formal value parameter.
An object is returned by a function.
C++ calls a copy constructor to make a copy of an object in each of the above cases. If there is no copy constructor defined for the class, C++ uses the default copy constructor, which copies each field, i.e., makes a shallow copy.

Don't write a copy constructor if shallow copies are ok
If the object has no pointers to dynamically allocated memory, a shallow copy is probably sufficient. Therefore the default copy constructor, default assignment operator, and default destructor are ok and you don't need to write your own.

If you need a copy constructor, you also need a destructor and operator=
If you need a copy constructor, it's because you need something like a deep copy, or some other management of resources. Thus it is almost certain that you will also need a destructor and need to override the assignment operator.

Copy constructor syntax
The copy constructor takes a reference to a const parameter. It is const to guarantee that the copy constructor doesn't change it, and it is a reference because a value parameter would require making a copy, which would invoke the copy constructor, which would make a copy of its parameter, which would invoke the copy constructor, which ...

Here is an example of a copy constructor for the Point class, which doesn't really need one because the default copy constructor's action of copying fields would work fine, but it shows how it works.

//=== file Point.h =============================================
class Point {
public:
. . .
Point(const Point& p); // copy constructor
. . .
};
//=== file Point.cpp ==========================================
. . .
Point::Point(const Point& p) {
x = p.x;
y = p.y;
}
. . .
//=== file my_program.cpp ====================================
. . .
Point p; // calls default constructor
Point s = p; // calls copy constructor.
p = s; // assignment, not copy constructor.
Difference between copy constructor and assignment
A copy constructor is used to initialize a newly declared variable from an existing variable. This makes a deep copy like assignment, but it is somewhat simpler:

There is no need to test to see if it is being initialized from itself.
There is no need to clean up (eg, delete) an existing value (there is none).
A reference to itself is not returned.

Copy constructor is
a constructor function with the same name as the class
used to make a deep copy of objects.
There are 3 important places where a copy constructor is called.

When an object is created from another object of the same type
When an object is passed by value as a parameter to a function
When an object is returned from a function
If a copy constructor is not defined in a class, the compiler defines one itself. This default performs a shallow copy. If the class has no pointer variables to dynamically allocated memory, then one need not worry about defining a copy constructor; it can be left to the compiler.

But if the class has pointer variables and has some dynamic memory allocations, then it is a must to have a copy constructor.

For ex:
class A //Without copy constructor
{
private:
int x;
public:
A() {x = 10;}
~A() {}
};
class B //With copy constructor
{
private:
char *name;
public:
B()
{
name = new char[20];
}
~B()
{
delete [] name;
}
//Copy constructor
B(const B &b)
{
name = new char[20];
strcpy(name, b.name);
}
};
Imagine that class B had no copy constructor. First, if an object is created from an existing object, no new memory is allocated: both name pointers refer to the same block. Then, since the memory is deleted in the destructor, the delete operator would be called twice for the same memory location.

This is a major risk. One consolation is that if the class is not complex, this will come to the fore during development itself. But if the class is very complicated, these kinds of errors are difficult to track down.

Checking the string if its a Palindrome- Program

#include <iostream>
#include <cstring>
using namespace std; // the namespace for cout<< & such functions

int main()
{
    char strn[80];
    cout << "Enter the string: ";
    cin.getline(strn, 80);
    int len = strlen(strn);

    bool flag = true; // a Boolean indicator to be used in our loop

    for (int c = 0; c != len/2; c++) // loop from 0 to half the length of the string
    {
        if (flag) // if it is a palindrome so far
        {
            if (strn[c] != strn[len-c-1]) // check that the characters match
            {
                flag = false; // if they don't, set the indicator to false
            }
        }
        else
        {
            break; // if it is not a palindrome, exit the for loop
        }
    }

    // if flag is true output "Palindrome", otherwise output "Not Palindrome"

    if (flag)
    {
        cout << "Palindrome";
    }
    else
    {
        cout << "Not Palindrome";
    }

    cin.get();
    return 0;
}

Everything about the this pointer (Basics for rookies)

The this pointer is used by member functions as a pointer to the class object instance. The address of the class instance is passed as an implicit parameter to the member functions. It is common knowledge that C++ keeps only one copy of each member function, while the data members get memory for every instance; the data of a particular instance is reached through the this pointer. Look at the sample below.


Important notes on this pointer:
1. The this pointer stores the address of the class instance, to enable pointer access to the members from within the member functions of the class.
2. The this pointer is not counted when calculating the size of the object.
3. The this pointer is not available in static member functions.
4. The this pointer is not modifiable.

Look at the following example to understand how to use the 'this' pointer explained in this C++ Tutorial.

class this_pointer_example // class for explaining C++ tutorial
{
int data1;
public:
//Function using this pointer for C++ Tutorial
int getdata()
{
return this->data1;
}
//Function without using this pointer
void setdata(int newval)
{
data1 = newval;
}
};

Thus, a member function can access a data member either through the this pointer or directly.

Thursday, January 28, 2010

Realloc to Malloc and Realloc to Free

realloc:
void * realloc ( void * ptr, size_t size );
Reallocate memory block

In case that ptr is NULL, the function behaves exactly as malloc, assigning a new block of size bytes and returning a pointer to the beginning of it.

eg. realloc (NULL, sizeof(int));

In case that the size is 0, the memory previously allocated in ptr is deallocated as if a call to free was made, and a NULL pointer is returned.

Wednesday, January 6, 2010

Why does a programmer need to take care of freeing memory?



If you have a subroutine like this

int subroutine(int param){
char example_variable [200];

/* whatever the routine does... */

}

is it necessary to release the memory used by the variable example_variable, or is it automatically released when the variable goes out of scope?


Variables are created in different spaces within a program. The example_variable[200] array is created on the stack, and that storage is released as soon as the function returns, so the programmer doesn't need to take care of freeing it. But any address obtained dynamically comes from the heap, a pool of memory managed by the program's allocator, so the programmer needs to take care of freeing or deleting it (basically returning the space back to the pool) once the purpose of that variable is served and it is no longer required.

operator new[] and delete[] C/C++

// operator delete[] example
#include <iostream>
#include <new>
using namespace std;

struct myclass {
myclass() {cout <<"myclass constructed\n";}
~myclass() {cout <<"myclass destroyed\n";}
};

int main () {
myclass * pt;

pt = new myclass[3];
delete[] pt;

return 0;
}

Output:

myclass constructed
myclass constructed
myclass constructed
myclass destroyed
myclass destroyed
myclass destroyed

new & delete in C/C++

new :
void* operator new (std::size_t size) throw (std::bad_alloc);
void* operator new (std::size_t size, const std::nothrow_t& nothrow_constant) throw();
Allocate storage space

The first version allocates size bytes of storage space, aligned to represent an object of that size, and returns a non-null pointer to the first byte of this block. On failure, it throws a bad_alloc exception.

The second version is the nothrow version. It does the same as the first version, except that on failure it returns a null pointer instead of throwing an exception.

operator new is declared in the global namespace, not in the std namespace.
The first and second versions are implicitly declared in every translation unit of a C++ program: the <new> header does not need to be included for them to be present.

operator new can be called explicitly as a regular function, but in C++, new is an operator with a very specific behavior: An expression with the new operator, first calls function operator new with the size of its type specifier as first argument, and if this is successful, it then automatically initializes or constructs the object (if needed). Finally, the expression evaluates as a pointer to the appropriate type.


delete :
void operator delete (void* ptr) throw ();
void operator delete (void* ptr, const std::nothrow_t& nothrow_constant) throw();
Deallocate storage space

The first and second versions deallocate the memory block pointed to by ptr (if not null), releasing the storage space previously allocated to it by a call to operator new and making that pointer location invalid.

The second (nothrow) version cannot be implicitly called by a delete expression (a delete expression calls the first version of operator delete once for each of its arguments). Although it can be called explicitly as a regular function call, its default definition serves no particular purpose on its own -- it is provided as the counterpart of the nothrow operator new, and is called automatically to release the storage if a constructor throws during a nothrow new expression.

operator delete can be called explicitly as a regular function, but in C++, delete is an operator with a very specific behavior: An expression with the delete operator, first calls the appropriate destructor (if needed), and then calls function operator delete to release the storage.

// operator delete example
#include <iostream>
#include <new>
using namespace std;

struct myclass {
myclass() {cout <<"myclass constructed\n";}
~myclass() {cout <<"myclass destroyed\n";}
};

int main () {
myclass * pt;
pt = new myclass;
delete pt;
return 0;
}

Output:
myclass constructed
myclass destroyed

Example of operator new:

#include <iostream>
#include <new>
using namespace std;

struct myclass {myclass() {cout <<"myclass constructed\n";}};

int main () {

int * p1 = new int;
// same as:
// int * p1 = (int*) operator new (sizeof(int));

int * p2 = new (nothrow) int;
// same as:
// int * p2 = (int*) operator new (sizeof(int),nothrow);

myclass * p3 = (myclass*) operator new (sizeof(myclass));
// (!) not the same as:
// myclass * p3 = new myclass;
// (constructor not called by function call, even for non-POD types)

new (p3) myclass; // calls constructor
// same as:
// operator new (sizeof(myclass),p3)

return 0;
}

Output: myclass constructed

malloc,calloc,realloc and free

malloc(),calloc(),realloc() & free():

malloc (memory allocation) is used to dynamically allocate memory at run time. Possible uses for this function are:
Read records of an unknown length.
Read an unknown number of database records.
Linked lists.
The simplest way to reserve memory is to code something like:

#include <string.h>

int main()
{
char string[1000];

strcpy (string, "Some text");
return 0;
}
The example above has two problems:

If the data is less than 1000 bytes we are wasting memory.
If the data is greater than 1000 bytes the program is going to crash.
The 1000 bytes are reserved throughout the life of the program. If this was a long-running program that rarely used the memory, it would again be wasteful.
malloc allows us to allocate exactly the correct amount of memory, and with the use of free, to hold it only for the time it is required.

Library: stdlib.h

Prototype: void *malloc(size_t size);

Syntax: char * String;

String = (char *) malloc(1000);
Looking at the example syntax above, 1000 bytes are reserved and the pointer String points to the first byte. The 1000 bytes are NOT initialized by malloc. If the memory is NOT available, a NULL pointer is returned.

The calloc() Function
The standard C library declares the function calloc() in <stdlib.h> as follows:

void *calloc(size_t elements, size_t sz);
calloc() allocates space for an array of elements, each of which occupies sz bytes of storage. The space of each element is initialized to binary zeros. In other words, calloc() is similar to malloc(), except that it handles arrays of objects rather than a single chunk of storage and that it initializes the storage allocated. The following example allocates an array of 100 int's using calloc():


int * p = (int*) calloc (100, sizeof(int));
Remember that in C++, you have better alternatives to calloc() and malloc(), namely new and new [], and these C functions should only be used for allocating POD (Plain Old Data) objects, never class objects. However, if you're using C or maintaining legacy C code, you might come across these functions.

realloc:
void * realloc ( void * ptr, size_t size );
Reallocate memory block

The size of the memory block pointed to by the ptr parameter is changed to size bytes, expanding or reducing the amount of memory available in the block.

The function may move the memory block to a new location, in which case the new location is returned. The content of the memory block is preserved up to the lesser of the new and old sizes, even if the block is moved. If the new size is larger, the value of the newly allocated portion is indeterminate.

In case that ptr is NULL, the function behaves exactly as malloc, assigning a new block of size bytes and returning a pointer to the beginning of it.

In case that the size is 0, the memory previously allocated in ptr is deallocated as if a call to free was made, and a NULL pointer is returned.

Return Value:A pointer to the reallocated memory block, which may be either the same as the ptr argument or a new location.
The type of this pointer is void*, which can be cast to the desired type of data pointer in order to be dereferenceable.
If the function failed to allocate the requested block of memory, a NULL pointer is returned.


/* realloc example: rememb-o-matic */
#include <stdio.h>
#include <stdlib.h>

int main ()
{
int input,n;
int count=0;
int * numbers = NULL;

do {
printf ("Enter an integer value (0 to end): ");
scanf ("%d", &input);
count++;
numbers = (int*) realloc (numbers, count * sizeof(int));
if (numbers==NULL)
{ puts ("Error (re)allocating memory"); exit (1); }
numbers[count-1]=input;
} while (input!=0);

printf ("Numbers entered: ");
for ( n=0; n < count; n++) printf ("%d ",numbers[n]);
free (numbers);
return 0;
}

The program prompts the user for numbers until a zero is entered. Each time a new value is introduced, the memory block pointed to by numbers is increased by the size of an int.

free:
Example
/* free example */
#include <stdio.h>
#include <stdlib.h>

int main ()
{
int * buffer1, * buffer2, * buffer3;
buffer1 = (int*) malloc (100*sizeof(int));
buffer2 = (int*) calloc (100,sizeof(int));
buffer3 = (int*) realloc (buffer2,500*sizeof(int));
free (buffer1);
free (buffer3);
return 0;
}


This program has no output. It just demonstrates some ways to allocate and free dynamic memory using the cstdlib functions.
