I'm coming from a Java background and have started working with objects in C++. But one thing that occurred to me is that people often use pointers to objects rather than the objects themselves. Why is that?
"Necessity is the mother of invention." The most of important difference that I would like to point out is the outcome of my own experience of coding. Sometimes you need to pass objects to functions. In that case, if your object is of a very big class then passing it as an object will copy its state (which you might not want ..AND CAN BE BIG OVERHEAD) thus resulting in an overhead of copying object .while pointer is fixed 4-byte size (assuming 32 bit). Other reasons are already mentioned above...
It's very unfortunate that you see dynamic allocation so often. That just shows how many bad C++ programmers there are.
In a sense, you have two questions bundled up into one. The first is when should we use dynamic allocation (using new)? The second is when should we use pointers?
The important take-home message is that you should always use the appropriate tool for the job. In almost all situations, there is something more appropriate and safer than performing manual dynamic allocation and/or using raw pointers.
In your question, you've demonstrated two ways of creating an object. The main difference is the storage duration of the object. When doing Object myObject; within a block, the object is created with automatic storage duration, which means it will be destroyed automatically when it goes out of scope. When you do new Object(), the object has dynamic storage duration, which means it stays alive until you explicitly delete it. You should only use dynamic storage duration when you need it. That is, you should always prefer creating objects with automatic storage duration when you can.
The two main situations in which you might require dynamic allocation:
1. You need the object to outlive the current scope - that specific object at that specific memory location, not a copy of it.
2. You need to allocate a lot of memory, which may easily fill up the stack.
When you do absolutely require dynamic allocation, you should encapsulate it in a smart pointer or some other type that performs RAII (like the standard containers). Smart pointers provide ownership semantics of dynamically allocated objects. Take a look at std::unique_ptr and std::shared_ptr, for example. If you use them appropriately, you can almost entirely avoid performing your own memory management (see the Rule of Zero).
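As a rough sketch of that point (the Widget type and useWidget function below are just placeholders, not anything from the question), encapsulating dynamic allocation in a smart pointer might look like this:
#include <memory>

struct Widget { void frob() {} };   // placeholder type for illustration

void useWidget()
{
    auto w = std::make_unique<Widget>(); // dynamic allocation, but owned by the smart pointer
    w->frob();
}   // the Widget is deleted automatically here, even if frob() were to throw

No delete appears anywhere, which is exactly what the Rule of Zero aims for.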
However, there are other more general uses for raw pointers beyond dynamic allocation, but most have alternatives that you should prefer. As before, always prefer the alternatives unless you really need pointers.
You need reference semantics. Sometimes you want to pass an object using a pointer (regardless of how it was allocated) because you want the function to which you're passing it to have access to that specific object (not a copy of it). However, in most situations, you should prefer reference types to pointers, because this is specifically what they're designed for. Note this is not necessarily about extending the lifetime of the object beyond the current scope, as in situation 1 above. As before, if you're okay with passing a copy of the object, you don't need reference semantics.
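A small sketch of that preference (Gadget and the two functions are invented for illustration):
struct Gadget { int value = 0; };   // placeholder type

// Pointer version: the caller must take the address, and the function must consider null.
void resetViaPointer(Gadget* g) { if (g) g->value = 0; }

// Reference version: same reference semantics, but it cannot be null and reads like a normal object.
void resetViaReference(Gadget& g) { g.value = 0; }

void demo()
{
    Gadget x;
    resetViaPointer(&x);
    resetViaReference(x);
}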
You need polymorphism. You can only call functions polymorphically (that is, according to the dynamic type of an object) through a pointer or reference to the object. If that's the behavior you need, then you need to use pointers or references. Again, references should be preferred.
You want to represent that an object is optional by allowing a nullptr to be passed when the object is being omitted. If it's an argument, you should prefer to use default arguments or function overloads. Otherwise, you should preferably use a type that encapsulates this behavior, such as std::optional (introduced in C++17 - with earlier C++ standards, use boost::optional).
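A small sketch of the std::optional alternative (the findNickname function and its return value are made up for illustration):
#include <optional>
#include <string>

// std::nullopt expresses "no value" without resorting to a null pointer.
std::optional<std::string> findNickname(bool hasNickname)
{
    if (hasNickname)
        return "Ace";          // implicitly converted to std::optional<std::string>
    return std::nullopt;
}

void demo()
{
    if (auto nick = findNickname(true))        // tests whether a value is present
        std::string greeting = "Hi, " + *nick; // *nick accesses the contained value
}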
You want to decouple compilation units to improve compilation time. The useful property of a pointer is that you only require a forward declaration of the pointed-to type (to actually use the object, you'll need a definition). This allows you to decouple parts of your compilation process, which may significantly improve compilation time. See the Pimpl idiom.
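A minimal sketch of the Pimpl idiom, assuming an invented Parser class (the names are placeholders):
// parser.h - clients only see a forward declaration of the implementation
#include <memory>

class Parser
{
public:
    Parser();
    ~Parser();          // must be defined where ParserImpl is complete
    void parse();
private:
    struct ParserImpl;                  // forward declaration only
    std::unique_ptr<ParserImpl> impl;   // callers never need ParserImpl's definition
};

// parser.cpp - the only file that needs the full definition
struct Parser::ParserImpl { int state = 0; };

Parser::Parser() : impl(std::make_unique<ParserImpl>()) {}
Parser::~Parser() = default;
void Parser::parse() { ++impl->state; }

Changing ParserImpl now only recompiles parser.cpp, not every client of parser.h.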
You need to interface with a C library or a C-style library. At this point, you're forced to use raw pointers. The best thing you can do is make sure you only let your raw pointers loose at the last possible moment. You can get a raw pointer from a smart pointer, for example, by using its get member function. If a library performs some allocation for you which it expects you to deallocate via a handle, you can often wrap the handle up in a smart pointer with a custom deleter that will deallocate the object appropriately.
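For instance, here is a common sketch of wrapping a C-style handle (a FILE* from the C standard library) in a smart pointer with a custom deleter; the file name is just an example:
#include <cstdio>
#include <memory>

// unique_ptr that calls fclose automatically when the handle goes out of scope
using FileHandle = std::unique_ptr<std::FILE, int (*)(std::FILE*)>;

FileHandle openFile(const char* path)
{
    return FileHandle(std::fopen(path, "r"), &std::fclose);
}

void demo()
{
    FileHandle f = openFile("data.txt");   // example path
    if (f)
        std::fgetc(f.get());               // get() hands the raw pointer to the C API only where needed
}   // fclose is called here automatically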
There are many use cases for pointers.
Polymorphic behavior. For polymorphic types, pointers (or references) are used to avoid slicing:
class Base { ... };
class Derived : public Base { ... };
void fun(Base b) { ... }
void gun(Base* b) { ... }
void hun(Base& b) { ... }
Derived d;
fun(d); // oops, all Derived parts silently "sliced" off
gun(&d); // OK, a Derived object IS-A Base object
hun(d); // also OK, reference also doesn't slice
Reference semantics and avoiding copying. For non-polymorphic types, a pointer (or a reference) will avoid copying a potentially expensive object:
Base b;
fun(b); // copies b, potentially expensive
gun(&b); // takes a pointer to b, no copying
hun(b); // regular syntax, behaves as a pointer
Note that C++11 has move semantics that can avoid many copies of expensive objects passed as function arguments and returned as values. But using a pointer will definitely avoid those copies and will allow multiple pointers to the same object (whereas an object can only be moved from once).
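As a rough sketch of what move semantics buys you here (the makeLines function and its contents are illustrative):
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> makeLines()
{
    std::vector<std::string> lines(1000, "some text");
    return lines;                   // moved out (or elided), not copied
}

void demo()
{
    std::vector<std::string> a = makeLines();   // no expensive copy of 1000 strings
    std::vector<std::string> b = std::move(a);  // explicit move; 'a' is left valid but unspecified
}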
Resource acquisition. Creating a pointer to a resource using the new operator is an anti-pattern in modern C++. Use a special resource class (one of the Standard containers) or a smart pointer (std::unique_ptr<> or std::shared_ptr<>). Consider:
{
auto b = new Base;
... // oops, if an exception is thrown, destructor not called!
delete b;
}
vs.
{
auto b = std::make_unique<Base>();
... // OK, now exception safe
}
A raw pointer should only be used as a "view" and not in any way involved in ownership, be it through direct creation or implicitly through return values. See also this Q&A from the C++ FAQ.
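A short sketch of that "view" role (the Texture type and draw function are placeholders): the unique_ptr owns, the raw pointer merely observes:
#include <memory>

struct Texture { int id = 0; };   // placeholder type

// Non-owning "view": this function uses the Texture but takes no part in its lifetime.
void draw(const Texture* t)
{
    if (t) { /* render using t->id */ }
}

void demo()
{
    auto tex = std::make_unique<Texture>();  // single owner
    draw(tex.get());                         // hand out a raw, non-owning view
}   // tex deletes the Texture; draw never did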
More fine-grained life-time control. Every time a shared pointer is copied (e.g. as a function argument), the resource it points to is kept alive. Regular objects (not created by new, either directly by you or inside a resource class) are destroyed when going out of scope.
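A minimal sketch of that lifetime extension with std::shared_ptr (the Session type and registry are invented for illustration):
#include <memory>
#include <vector>

struct Session { int id = 0; };   // placeholder type

std::vector<std::shared_ptr<Session>> registry;

void keep(std::shared_ptr<Session> s)
{
    registry.push_back(std::move(s));  // another owner; the Session now outlives the caller's scope
}

void demo()
{
    auto s = std::make_shared<Session>();
    keep(s);   // copying the shared_ptr keeps the Session alive
}              // s goes out of scope, but the Session survives in the registry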
One reason for using pointers is to interface with C functions. Another reason is to save memory: for example, instead of passing an object that contains a lot of data and has a processor-intensive copy constructor to a function, just pass a pointer to the object, saving memory and time, especially if you're in a loop. However, a reference would be better in that case, unless you're using a C-style array.
The key strength of object pointers in C++ is that they allow polymorphic arrays and maps of pointers to a common superclass. This lets you, for example, put parakeets, chickens, robins, ostriches, etc. into an array of Bird pointers.
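A hedged sketch of such a polymorphic container (the bird classes below are invented for illustration):
#include <memory>
#include <vector>

struct Bird { virtual ~Bird() = default; virtual const char* call() const = 0; };
struct Parakeet : Bird { const char* call() const override { return "chirp"; } };
struct Ostrich  : Bird { const char* call() const override { return "boom"; } };

void demo()
{
    std::vector<std::unique_ptr<Bird>> birds;   // one container, many concrete types
    birds.push_back(std::make_unique<Parakeet>());
    birds.push_back(std::make_unique<Ostrich>());
    for (const auto& b : birds)
        b->call();   // dispatched to the actual derived type
}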
Additionally, dynamically allocated objects are more flexible and use heap memory, whereas a locally allocated object lives on the stack unless it is static. Keeping large objects on the stack, especially when using recursion, can easily lead to a stack overflow.
In C++, objects allocated on the stack (using an Object object; statement within a block) will only live within the scope they are declared in. When the block of code finishes execution, the objects declared in it are destroyed. Whereas if you allocate memory on the heap, using Object* obj = new Object(), the object continues to live on the heap until you call delete obj.
I would create an object on the heap when I want to use it outside of the block of code that declared/allocated it.
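A small sketch contrasting those two lifetimes (Object here is a stand-in type, and returning a unique_ptr is just one way to hand the heap object out of the block):
#include <memory>

struct Object { int value = 0; };   // stand-in type for illustration

std::unique_ptr<Object> makeOnHeap()
{
    Object local;                           // automatic storage: destroyed at the closing brace below
    local.value = 1;
    auto heap = std::make_unique<Object>(); // dynamic storage: lives as long as someone owns it
    return heap;                            // ownership is handed to the caller
}                                           // 'local' is destroyed here

void demo()
{
    auto obj = makeOnHeap();   // still alive here; destroyed when 'obj' goes out of scope
}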