The submitted title is missing the salient keyword "finally" that motivates the blog post. The actual subtitle Raymond Chen wrote is: "C++ says “We have try…finally at home.”"
In other words, Raymond is saying... "We already have Java's 'finally' feature at home in the C++ refrigerator, and it's called 'destructor'."
To continue the meme analogy, the kid's idea of <X> doesn't match mom's idea of <X>, and the kid disagrees that they're equivalent. E.g. "Mom, can we order pizza? No, we have leftover casserole in the fridge."
So some kids would complain that the C++ destructor/RAII philosophy requires creating a whole "class X { public: ~X(); };", which is sometimes inconvenient, so it doesn't exactly equal "finally".
Yeah it's a huge mistake IMO. I see it fucking up titles so frequently, and it flies in the face of the "do not editorialise titles" rule:
[...] please use the original title, unless it is misleading or linkbait; don't editorialize.
It is much worse, I think, to regularly and drastically change the meaning of a title automatically until a moderator happens to notice and change it back, than to allow the occasional somewhat exaggerated original post title.
As it stands, the HN title suggests that Raymond thinks the C++ 'try' keyword is a poor imitation of some other language's 'try'. In reality, the post is about a way to mimic Java's 'finally' in C++, which the original title clearly (if humorously) encapsulates. Raymond's words have been misrepresented here for over 4 hours at this point. I do not understand how this is an acceptable trade-off.
While I disagree with you that it's "a huge mistake" (I think it works fine in 95% of cases), it strikes me that this sort of semantic textual substitution is a perfect task for an LLM. Why not just ask a cheap LLM to de-sensationalize any post which hits more than 50 points or so?
Personally, I would rather we have a lower bar for killing submissions quickly with maybe five or ten flags and less automated editorializing of titles.
A better approach would be to not so aggressively modify headlines.
Relying on somebody to detect the error, email the mods (significant friction), and then hope the mods act (after discussion has already been skewed) is not really a great solution.
It has been up with the incorrect title for over 7 hours now. That's most of the Hacker News front-page lifecycle. The system for correcting bad automatic editorialisation clearly isn't working well enough.
Oh, come on man! These are trivial bugs. Whoever noticed it first should have sent an email to the mods. I did so before I posted my previous comment, and I now see that the title has been changed appropriately.
Presumably nobody informed the mods (before I did), and it was very early in the morning in the US (assuming the mods are based in the US). That would explain the delay.
Anyway, going forward, if anything like this happens again, folks should simply shoot an email to the mods immediately, and if the topic is really interesting and deserving of more discussion, they can always ask the mods to keep the post on the front page longer via the second-chance pool etc.
It just takes a minute or two of one's time and hence not worth getting het up over.
It would be easier for everyone involved, and not rely on mods being awake, if HN didn't just automatically drastically change the meaning of headlines.
Again, this post was misrepresenting Raymond's words for over 7 hours. That's most of its time on the front page. The current system doesn't work.
It's rare to see the mangling heuristics improve a title these days. There was a specific type of clickbait title that was overused at the time, so a rule was created. And now that the original problem has passed, we're stuck with it.
I'm curious about the actual origin now, given that a quick search shows only vague references or claims that it is recent, but this meme is present in Eddie Murphy's "Raw" from 1987, so it is at least that old.
Edit: A deep research run by Gemini 3.0 Pro says the origin is likely stand-up comedy routines between 1983–1987 and particularly mentions Eddie Murphy, with the 1983 socioeconomic precursor "You ain't got no McDonald's money" in Delirious (1983) culminating in the meme in Raw (1987). So Eddie might very well be the true origin.
> So some kids would complain that the C++ destructor/RAII philosophy requires creating a whole "class X { public: ~X(); };", which is sometimes inconvenient, so it doesn't exactly equal "finally".
Those figurative kids would be stuck in a mental model where they try to shoehorn their ${LanguageA} idioms onto applications written in ${LanguageB}. As the article says, C++ has had destructors since the "C with Classes" days. Complaining that you might need to write a class is specious reasoning, because if you have a resource worth managing, you already use RAII to manage it. And RAII is one of the most fundamental and defining features of C++.
It all boils down to whether one knows what they are doing, or even bothers to know what they are doing.
Destructors are vastly superior to the finally keyword because the resource-release logic only has to be written once (in the destructor), as opposed to in every finally clause. For example, a file always closes itself when it goes out of scope instead of having to be explicitly closed by the person who opened the file. Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks. Not to mention how branching and conditional initialization complicate things. You can often pair up constructors with destructors in the code so that it becomes very obvious when resource acquisition and release do not match up.
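A minimal C++ sketch of that point (the LogFile class is made up for illustration; real code would want better error reporting): the release logic is written once in the destructor, and every scope that creates a LogFile gets the cleanup for free, with no finally nesting.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    // Hypothetical RAII wrapper: the release logic lives in one place.
    class LogFile {
    public:
        explicit LogFile(const std::string& path)
            : f_(std::fopen(path.c_str(), "w")) {
            if (!f_) throw std::runtime_error("cannot open " + path);
        }
        ~LogFile() { std::fclose(f_); }          // runs on every exit path
        LogFile(const LogFile&) = delete;
        LogFile& operator=(const LogFile&) = delete;

        void write(const std::string& line) {
            if (std::fputs(line.c_str(), f_) < 0)
                throw std::runtime_error("write failed");
        }

    private:
        std::FILE* f_;
    };

    void work() {
        LogFile log("work.log");   // no try/finally at the call site
        log.write("step 1\n");
        log.write("step 2\n");     // if this throws, ~LogFile still closes the file
    }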
I couldn't agree more. And in the rare cases where destructors do need to be created inline, it's not hard to combine destructors with closures into library types.
To point at one example: we recently added `std::mem::DropGuard` [1] to Rust nightly. This makes it easy to quickly create (and dismiss) destructors inline, without the need for any extra keywords or language support.
[1]: https://doc.rust-lang.org/nightly/std/mem/struct.DropGuard.h...
A writable file closing itself when it goes out of scope is usually not great, since errors can occur when closing the file, especially when using networked file systems.
You need to close it and check for errors as part of the happy path. But it's great that in the error path (be that using an early return or throwing an exception), you can just forget about the file and you will never leak a file descriptor.
You may need to unlink the file in the error path, but that's best handled in the destructor of a class which encapsulates the whole "write to a temp file, rename into place, unlink on error" flow.
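A rough sketch of that encapsulation (AtomicFileWriter and its members are names invented here, not a real library type): close-and-check and the rename happen explicitly on the happy path via commit(), and the destructor only has to clean up if commit() never ran.

    #include <cstdio>
    #include <stdexcept>
    #include <string>

    class AtomicFileWriter {
    public:
        explicit AtomicFileWriter(std::string target)
            : target_(std::move(target)), tmp_(target_ + ".tmp"),
              f_(std::fopen(tmp_.c_str(), "w")) {
            if (!f_) throw std::runtime_error("cannot open " + tmp_);
        }

        void write(const std::string& data) {
            if (std::fwrite(data.data(), 1, data.size(), f_) != data.size())
                throw std::runtime_error("write failed");
        }

        // Happy path: close, check for errors, then rename into place.
        void commit() {
            if (std::fclose(f_) != 0) { f_ = nullptr; throw std::runtime_error("close failed"); }
            f_ = nullptr;
            if (std::rename(tmp_.c_str(), target_.c_str()) != 0)
                throw std::runtime_error("rename failed");
            committed_ = true;
        }

        // Error path: if commit() never ran, drop the temp file quietly.
        ~AtomicFileWriter() {
            if (f_) std::fclose(f_);
            if (!committed_) std::remove(tmp_.c_str());
        }

        AtomicFileWriter(const AtomicFileWriter&) = delete;
        AtomicFileWriter& operator=(const AtomicFileWriter&) = delete;

    private:
        std::string target_, tmp_;
        std::FILE* f_ = nullptr;
        bool committed_ = false;
    };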
Java solved it by letting exceptions attach secondary ("suppressed") exceptions, in particular those occurring during stack unwinding (e.g. via try-with-resources).
The result is an exception tree that reflects the failures that occurred in the call tree following the first exception.
The entire point of the article is that you cannot throw from a destructor. Now how do you signal that closing/writing the file in the destructor failed?
You are allowed to throw from a destructor as long as there's not already an active exception unwinding the stack. In my experience this is a total non-issue for any real-world scenario. Propagating errors from the happy path matters more than situations where you're already dealing with a live exception.
For example: you can't write to a file because of an I/O error, and when throwing that exception you find that you can't close the file either. What are you going to do about that other than possibly log the issue in the destructor? Wait and try again until it can be closed?
If you really must force Java semantics into it with chains of exception causes (as if anybody handled those gracefully, ever) then you can. Get the current exception and store a reference to the new one inside the first one. But I would much rather use exceptions as little as possible.
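For what it's worth, C++17's std::uncaught_exceptions() lets a destructor make exactly that distinction. A hedged sketch (the Commit type and its do_commit() step are invented for illustration):

    #include <exception>
    #include <stdexcept>

    // Sketch: a destructor that reports its own failure on the happy path,
    // but only logs (never throws) when it runs during stack unwinding.
    class Commit {
    public:
        Commit() : exceptions_at_entry_(std::uncaught_exceptions()) {}

        ~Commit() noexcept(false) {
            bool unwinding = std::uncaught_exceptions() > exceptions_at_entry_;
            bool ok = do_commit();            // hypothetical cleanup step
            if (!ok) {
                if (unwinding) {
                    // Already handling another error: just record it somewhere.
                } else {
                    throw std::runtime_error("commit failed");
                }
            }
        }

    private:
        bool do_commit() noexcept { return true; }  // stand-in for real work
        int exceptions_at_entry_;
    };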
Destructors and finally clauses serve different purposes IMO. Most of the languages that have finally clauses also have destructors.
> Syntax is also less cluttered with less indentation, especially when multiple objects are created that require nested try... finally blocks.
I think that's more of a point against try...catch/maybe exceptions as a whole, rather than the finally block. (Though I do agree with that. I dislike that aspect of exceptions, and generally prefer something closer to std::expected or Rust Result.)
> Most of the languages that have finally clauses also have destructors.
Hm, is that true? I know of finally from Java, JavaScript, C# and Python, and none of them have proper destructors. I mean some of them have object finalizers which can be used to clean up resources whenever the garbage collector comes around to collect the object, but those are not remotely similar to destructors which typically run deterministically at the end of a scope. Python's 'with' syntax comes to mind, but that's very different from C++ and Rust style destructors since you have to explicitly ask the language to clean up resources with special syntax.
Which languages am I missing which have both try..finally and destructors?
In C# the closest analogue to a C++ destructor would probably be a `using` block. You’d have to remember to write `using` in front of it, but there are static analysers for this. It gets translated to a `try`–`finally` block under the hood, which calls `Dispose` in `finally`.
using (var foo = new Foo())
{
}
// foo.Dispose() gets called here, even if there is an exception
Or, to avoid nesting:
using var foo = new Foo(); // same but scoped to closest current scope
There is also `await using` in case the cleanup is async (`await foo.DisposeAsync()`).
I think Java has something similar called try with resources.
try (var foo = new Foo()) {
}
// foo.close() is called here.
I like the Java method for things like files because if there's an exception during the close of a file, the regular `IOException` block handles that error the same way it handles a read or write error.
void bar() {
    try (var f = foo()) {
        doMoreHappyPath(f);
    }
    catch(IOException ex) {
        handleErrors();
    }
}

File foo() throws IOException {
    File f = openFile();
    doHappyPath(f);
    if (badThing) {
        throw new IOException("Bad thing");
    }
    return f;
}
That said, I think this is a bad practice (IMO). Generally speaking I think the opening and closing of a resource should happen at the same scope.
Making it non-local is a recipe for an accident.
*EDIT* I've made a mistake while writing this, but I'll leave it up there because it demonstrates my point. The file is left open if a bad thing happens.
In Java, I agree with you that the opening and closing of a resource should happen at the same scope. This is a reasonable rule in Java, and not following it in Java is a recipe for errors because Java isn't RAII.
In C++ and Rust, that rule doesn't make sense. You can't make the mistake of forgetting to close the file.
That's why I say that Java, Python and C#'s context managers aren't remotely the same. They're useful tools for resource management in their respective languages, just like defer is a useful tool for resource management in Go. They aren't "basically RAII".
As someone coming from RAII to C#, you get used to it, I'd say. You "just" have to think differently. Lean into records and immutable objects whenever you can and IDisposable interface ("using") when you can't. It's not perfect but neither is RAII. I'm on a learning path but I'd say I'm more productive in C# than I ever was in C++.
I agree with this. I don't dislike non-RAII languages (even though I do prefer RAII). I was mostly asking a rhetorical question to point out that it really isn't the same at all. As you say, it's not a RAII language, and you have to think differently than when using a RAII language with proper destructors.
I don't view finalizers and destructors as different concepts. The notion only matters if you actually need cleanup behavior to be deterministic rather than just eventual, or you are dealing with something like thread locals. (Historically, C# even simply called them destructors.)
There's a huge difference in programming model. You can rely on C++ or Rust destructors to free GPU memory, close sockets, free memory owned through an opaque pointer obtained through FFI, implement reference counting, etc.
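To make the FFI case concrete, a std::unique_ptr with a custom deleter is usually all it takes; in this sketch the gpu_buffer_* functions are stand-ins (stubbed with malloc/free) for whatever the real C API provides:

    #include <cstdlib>
    #include <memory>

    // Stand-ins for a C library's opaque handle and its alloc/free pair.
    struct gpu_buffer;
    gpu_buffer* gpu_buffer_alloc(std::size_t bytes) {
        return static_cast<gpu_buffer*>(std::malloc(bytes));   // stub for the sketch
    }
    void gpu_buffer_free(gpu_buffer* p) { std::free(p); }

    struct GpuBufferFree {
        void operator()(gpu_buffer* p) const noexcept { gpu_buffer_free(p); }
    };
    using GpuBuffer = std::unique_ptr<gpu_buffer, GpuBufferFree>;

    int main() {
        GpuBuffer buf{gpu_buffer_alloc(32u << 20)};
        // ... use buf.get() with the C API ...
        // buf is freed deterministically here, on every exit path.
    }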
I've had the displeasure of fixing a Go code base where finalizers were actively used to free opaque C memory and GPU memory. The Go garbage collector obviously didn't consider it high priority to free these 8-byte objects which just wrap a pointer, because it didn't know that the objects were keeping tens of megabytes of C or GPU memory alive. I had to touch so much code to explicitly call Destroy methods in defer blocks to avoid running out of memory.
For GCed languages, I think finalizers are a mistake. They only serve to make it harder to reason about the code while masking problems. They also have negative impacts on GC performance.
Technically CPython has deterministic destructors: __del__ gets called immediately when the ref count drops to zero (reference cycles aside), but that's an implementation detail, not a language-spec guarantee.
It's not the same thing at all because you have to remember to use the context manager, while in C++ the user doesn't need to write any extra code to use the destructor, it just happens automatically.
Calling arbitrary callbacks from a destructor is a bad idea. Sooner or later someone will violate the requirement about exceptions, and your program will be terminated immediately. So I'd only use this pattern in -fno-exceptions projects.
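If you do accept callbacks in a guard like that, one defensive option is to have the destructor swallow whatever the callback throws; a minimal sketch (ScopeGuard is a made-up name, not std::scope_exit), with the obvious trade-off that the cleanup error is silently dropped:

    #include <utility>

    template <typename F>
    class ScopeGuard {
    public:
        explicit ScopeGuard(F f) : f_(std::move(f)) {}
        ~ScopeGuard() {
            try {
                f_();                      // arbitrary user callback
            } catch (...) {
                // Never let a cleanup exception escape the destructor:
                // during unwinding that would mean std::terminate.
            }
        }
        ScopeGuard(const ScopeGuard&) = delete;
        ScopeGuard& operator=(const ScopeGuard&) = delete;

    private:
        F f_;
    };

    // Usage: ScopeGuard g{[] { /* cleanup that might throw */ }};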
In a similar vein, care must be taken when calling arbitrary callbacks while iterating a data structure - because the callback may well change the data structure being iterated (classic example is a one-shot event handler that unsubscribes when called), which will break naïvely written code.
I always wonder whether C++ syntax ever becomes readable when you sink more time into it, and if so - how much brain rewiring we would observe on a functional MRI.
It does... until you switch employers. Or sometimes even just read a coworker's code. Or even your own older code. Actually no, I don't think anyone achieved full readability enlightenment. People like me just hallucinated it after doing the same things for too long.
And yet, somehow Lisp continues to be everyone's sweetheart, even though creating literal new DSLs for every project is one of the features of the language.
Lisp doesn't have much syntax to speak of. All of the DSLs use the same basic structure and are easy to read.
Cpp has A LOT of syntax: init rules, consts, references, move, copy, templates, special cases, etc. It also includes most of C, which is small but has so many basic language design mistakes that "C puzzles" is a book.
The syntax and the concepts (const, move, copy, etc.) are orthogonal. You could conceivably write a Lisp/s-expression syntax for C++, and about all it would improve is the preprocessor macros. And a DSL with easy syntax can still be hard to follow if it's built on unfamiliar, uncommon, project-specific concepts.
Well-designed abstractions do that in every language. And badly designed ones do the opposite, again in all languages. There's nothing special about Lisp here
Sure, but it's you who singled out Lisp here. The whole point of a DSL is designing a purpose-built formalism that makes a particular problem easy to reason about. That's hardly a parallel to the ever-growing vocabulary of standard C++.
In my opinion, C++ syntax is pretty readable. Of course there are codebases that are difficult to read (heavily abstracted, templated codebases especially), but it's not really that different compared to most other languages. But this exists in most languages, even C can be as bad with use of macros.
By far the worst in this aspect has been Scala, where every codebase seems to use a completely different dialect of the language, completely different constructs, etc. There seems to be very little agreement on how the language should be used. Much, much less than C++.
It does get easy to read, but then you unlock a deeper level of misery which is trying to work out the semantics. Stuff like implicit type conversions, remembering the rule of 3 or 5 to avoid your std::moves secretly becoming a copy, unwittingly breaking code because you added a template specialization that matches more than you realized, and a million others.
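One of those traps in concrete form (Widget is just a toy type): a user-declared destructor suppresses the implicitly generated move operations, so std::move quietly degrades to a copy.

    #include <iostream>
    #include <string>
    #include <utility>

    struct Widget {
        std::string name;
        ~Widget() {}   // user-declared destructor: implicit moves are not generated
    };

    int main() {
        Widget a{std::string(1000, 'x')};
        Widget b = std::move(a);   // calls the copy constructor, not a move
        std::cout << a.name.size() << ' ' << b.name.size() << '\n';
        // Prints "1000 1000": 'a' was copied, not moved.
        // Fix: follow the rule of five, e.g. explicitly default the move operations.
    }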
"using namespace std;" goes a long way to make C++ more readable and I don't really care about the potential issues. But yeah, due to a lack of a nice module system, this will quickly cause problems with headers that unload everything into the global namespace, like the windows API.
I wish we had something like Javascript's "import {vector, string, unordered_map} from std;". One separate using statement per item is a bit cumbersome.
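For what it's worth, C++17 already lets a single using-declaration pull in several names, which covers part of that wish (no modules required):

    #include <string>
    #include <unordered_map>
    #include <vector>

    // C++17: one using-declaration can name several members of std.
    using std::vector, std::string, std::unordered_map;

    int main() {
        unordered_map<string, vector<int>> counts;
        counts["a"].push_back(1);
    }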
Last time I tried, modules were a spuriously supported mess. I'll give them another try once they have ironclad support in cmake, gcc, clang and Visual Studio.
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write
foo
defer revert_foo
, when scanning the code, it’s easier to verify that you didn’t forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.
A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it: https://docs.rs/defer-rs/latest/defer_rs/
It's syntactic sugar for the destructor/RAII approach.
It's easy to demonstrate that destructors run after evaluating `return` in Rust:
struct PrintOnDrop;

impl Drop for PrintOnDrop {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    let _p = PrintOnDrop;
    return println!("returning");
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.
EDIT: I don’t think you can actually put a return in a defer, I may have misremembered, it’s been several years. Disregard this comment chain.
It gets even better in swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
    let i: Int // declared but not defined yet!
    defer { return i }
    // all code paths must define i exactly once, or it’s a compiler error
    if foo() {
        i = 0
    } else {
        i = 1
    }
    doOtherStuff()
}
No, I actually misremembered… you can’t return in a defer.
The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:
func callFoo() -> FooResult {
    let fooParam: Int // declared, not defined yet
    defer {
        // fooParam must get defined by the end of the function
        foo(fooParam)
        otherStuffAfterFoo() // …
    }
    // all code paths must assign fooParam
    if cond {
        fooParam = 0
    } else {
        fooParam = 1
        return // early return!
    }
    doOtherStuff()
}
Blame it on it being years since I’ve coded in swift, my memory is fuzzy.
Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit is still only in std::experimental (the Library Fundamentals TS), which is still opt-in in most cases.
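For anyone curious, here is a hedged sketch of that __LINE__-plus-lambda trick (DEFER and DeferHelper are names invented here, not the commenter's actual code and not a standard facility); it needs C++17 for class template argument deduction:

    #include <cstdio>
    #include <utility>

    template <typename F>
    class DeferHelper {
    public:
        explicit DeferHelper(F f) : f_(std::move(f)) {}
        ~DeferHelper() { f_(); }                       // runs at end of scope
        DeferHelper(const DeferHelper&) = delete;
        DeferHelper& operator=(const DeferHelper&) = delete;
    private:
        F f_;
    };

    // __LINE__ gives each guard a unique variable name, so callers never
    // have to invent one themselves.
    #define DEFER_CAT2(a, b) a##b
    #define DEFER_CAT(a, b) DEFER_CAT2(a, b)
    #define DEFER(...) DeferHelper DEFER_CAT(deferGuard_, __LINE__){[&] { __VA_ARGS__; }}

    int main() {
        std::FILE* f = std::fopen("defer_demo.txt", "w");
        if (!f) return 1;
        DEFER(std::fclose(f));          // closes the file on every exit path
        std::fputs("hello\n", f);
        return 0;
    }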
This is a good “how C++ does it” explanation, but I think it’s more accurate to say destructors implement finally-style cleanup in C++, not that they are finally. finally is about operation-scoped cleanup; destructors are about ownership. C++ just happens to use the same tool for both.
> In Java, Python, JavaScript, and C# an exception thrown from a finally block overwrites the original exception, and the original exception is lost.
Pet peeve of mine: all these languages got it wrong. (And C++ got it extra-wrong.)
The error you want to log or report to the user is almost certainly the original exception, not the one from the finally block. The error from the finally block is probably a side effect of the original exception. Reporting the finally exception obscures information about the root cause, making it harder to debug the problem.
Many of these languages do attach the original exception to the new exception in some way, so you can get at it if you need to, but whatever actually catches and logs the exception later has to go out of its way to make sure to log the root cause rather than some stupid side effect. The hierarchy should be reversed: the exception thrown by `finally` should be added as an attachment to the original exception, perhaps placed in a list of "secondary" errors. Or you could even just throw it away, honestly the original exception is almost always all you care about anyway.
(C++ of course did much worse by just crashing in this scenario. I imagine this to be the outcome of some debate in the committee where they couldn't decide which exception should take priority. And now everyone has internalized this terrible decision by saying "well, destructors shouldn't throw" without seeming to understand that this is equivalent to saying "destructors shouldn't have bugs". WELL OF COURSE THEY SHOULDN'T BUT GOOD LUCK WITH THAT.)
It's a snowclone based on the meme, "Mom, can we get <X>? No, we have <X> at home." : https://www.google.com/search?q=%22we+have+x+at+home%22+meme
https://github.com/isocpp/CppCoreGuidelines/issues/2203
Java is actively removing its finalizers.
They are fundamentally different concepts.
See Destructors, Finalizers, and Synchronization by Hans Boehm - https://dl.acm.org/doi/10.1145/604131.604153
Sure, destructors are great, but you still want a "finally" for stuff you can't do in a destructor.
You can argue that RAII is more elegant, because it doesn't add one mandatory indentation level.
If you can't, it's not remotely "basically the same as C++ RAII".
I have thoroughly forgotten which header std::ranges::iota comes from. I don't care either.
(1) Why doesn't it look like C++?
(2) Why does it look so much like C++?
> whether C++ syntax ever becomes readable when you sink more time into it,
Yes, and the easy approach is to learn as you need/go.
how old is this post that 3.2 is "now"?
In Java the following is perfectly valid:
try {
    throw new IllegalStateException("Critical error");
} finally {
    return "Move along, nothing to see here";
}