Note: For this post, you'll need a pretty decent understanding of programming. I'll explain most of the words I use in the article, but I still expect you to know what an "object" is, and not just in the OOP sense.
Destructors are great. When I first learned C++ they seemed like magic, and now I'm mildly annoyed by languages that don't have them. The thing is, most languages don't have them. If you've only ever coded in Java or C#, you're probably wondering why I like such terrible, useless things, because you've never actually experienced destructors. Java has the overrideable finalize method, and C# has something it calls destructors (which even apes the C++ syntax), but neither one is a real destructor.
Object lifetimes
Before we can talk about what a destructor is and why it's great, we need to talk about something related: Object lifetimes. An object's lifetime is just how long it sticks around as usable, allocated memory. If you have some handle to that object, be it a pointer, a reference, a variable, whatever, then until the object's lifetime is over, you can continue using it.
In most languages, it's outright impossible to guarantee exactly when every object's lifetime ends, though some get closer than others. In Java, for example, an object lasts at least until it's no longer referenced, and then until the garbage collector gets around to collecting it. You have no guarantee when, or even if, that happens.
On the other hand, languages like C++, Rust, and Swift have much more strictly defined object lifetimes. I won't go into the exact rules, but the rule of thumb is that once every variable referring to the object is out of scope, that object's lifetime ends immediately, rather than ending at some indeterminate point afterwards. I'll call these "strict" lifetimes, since they have a precise end time.
There is, of course, an exception in C++ for objects allocated with new; in that case, an object's lifetime ends when it's deleted.
One important corollary to note is that when an object's lifetime ends, so do the lifetimes of its fields.
Destructors
A destructor is a function that runs at the end of a strict lifetime. It's basically the inverse of a constructor, hence the name. You can absolutely guarantee that:
- A destructor will run, barring something very strange happening.
- It will run only after the object is no longer being used1.
- It will run soon after the object is no longer being used (mostly).
Number 3 is somewhat fuzzy, but if your language has strict lifetimes and you restrict things to the smallest scope you reasonably can, then the destructor will run quickly after the object's last usage.
Because a field's lifetime ends with its containing object, you don't need to explicitly call the destructors for your fields. They're called after the destructor for the containing object.
Why are they good?
There are two related things that destructors allow you to do: Simpler cleanup, and RAII.
Simpler cleanup
Have you ever coded anything complicated in C? You've probably got more than a few blocks of code like this:
struct foo *my_foo = construct_foo();
use_a_foo(my_foo);
use_it_again(my_foo);
destruct_foo(my_foo);
That's all well and good, until you remember that use_a_foo might fail, and if it does, we shouldn't run use_it_again. If use_a_foo does fail, it'll return false, so let's check that:
struct foo *my_foo = construct_foo();
if (!use_a_foo(my_foo)) return false;
use_it_again(my_foo);
destruct_foo(my_foo);
The issue here is that now, sometimes, you're not properly destructing my_foo. Sure, in this example it's trivial to fix, but it quickly gets out of hand, trying to keep the code legible while also correctly destructing every object.
In contrast, this is the equivalent C++:
foo my_foo;
if (!use_a_foo(my_foo)) return false;
use_it_again(my_foo);
The foo destructor is called implicitly as soon as my_foo goes out of scope, so you don't need to manage it. That means that whatever cleanup needs to be done -- closing file handles, deallocating memory, whatever -- happens automatically. In C++ you also have exceptions, and if you throw one, the destructors are still called as the stack unwinds to a scope above the one the object is in.
RAII
RAII, short for "resource acquisition is initialization", is a fairly small leap once you've realized that "cleanup" can be generalized a little. The name means that anything which takes control of a resource -- be it a thread handle, a file handle, whatever -- does so when it's constructed, and then makes sure the resource is cleaned up when it's destructed.
For example, this is a fairly common pattern for acquiring mutexes in C++:
{
std::lock_guard _l(the_mutex);
// ... do your processing ...
}
When it's constructed, a std::lock_guard takes control of a mutex by locking it. That way, no matter what happens inside the block, unless you forcibly exit the entire thread somehow, the mutex still gets unlocked once the block is done, and it happens automatically.
But the concept has been expanded somewhat. Some types do error-checking on destruction, like std::thread, whose destructor makes sure that the thread has either finished executing and been joined back to the parent thread, or been detached so it can keep executing on its own -- and terminates the program if neither has happened.
So what's a finalizer?
A finalizer is like a destructor in that both run at the end of a lifetime, but the word "finalizer" is used when the lifetime isn't strict. Because the lifetime isn't strict -- indeed, it might not end until the program stops -- you have no useful guarantee that the finalizer will run at all, or that it'll run "soon" after the object stops being used.
Now, to be clear, there's nothing inherently wrong with a language not having destructors. Most languages provide some way around the lack. In C#, you can do this:
lock (obj) {
// ... do your processing on obj ...
}
That's arguably even better design than C++'s lock guards, since it makes it explicitly clear what is having its access controlled, rather than just that something is controlled. The lock statement will also release the lock in a timely manner no matter what.
More generally, anything that implements IDisposable can basically be given a destructor:
using (Foo f = new Foo()) {
// use f
}
There are only two minor issues with that:
- It's yet another piece of syntax to learn and remember when coding.
- You need to actually implement IDisposable, which might not be done in bad code2.
Both of those can be worked around if necessary, but compared to the inherent simplicity and elegance of RAII, it's frustrating. Again, it's not necessarily a bad design, just an uncomfortable one.
Why finalizers?
To put it simply, they're easier on both language designers and users. Just sort of handwaving away object destruction and saying "it'll go away eventually once you're done with it" is much easier than writing or learning, say, Rust's complex object lifetime semantics. Personally, I prefer Rust, because it does give me that information and avoids the overhead of garbage collection in most cases, but when you're just getting into coding, it's just another thing on the already large pile of things to learn.
Plus, destructors really aren't necessary for most languages. Java and C#, for example, already introduce the overhead of a virtualization layer; tossing a garbage collector into the mix isn't that much more. Add something like using or try-with-resources and you have a workable alternative, albeit with a little more mental overhead.
Questions?
If you're still confused about this, feel free to leave comments! This can be a tough concept to wrap your head around, especially if you've never had any exposure to destructors before. If you're reading this with only a background in JavaScript... well, you have my condolences. It was probably tough. Thanks for sticking it out!
Footnotes
- C++ does allow you to call destructors manually (the syntax is foo->~type_of_foo()), but this is very much not recommended. It causes all sorts of issues, because destructors typically depend on all three assumptions holding, and even if you do everything right you can still break things.
- Interestingly, Java seems to have an epidemic of not using AutoCloseable. For a while I wondered why, but then I realized that AutoCloseable was only added in Java 7! Certainly frustrating if you're dealing with code so old it predates generics. Thankfully, it can be worked around fairly simply in most cases.
Top comments (1)
I did a workshop once on this very topic for new programmers in our company: bringing some deterministic lifetime to objects in C#.
And since I have a background in C++ too, I also like destructors, and have sometimes been frustrated by the lack of them in C# when I wanted an object's life to end deterministically instead of at some time (maybe).