panic in a no-unwind function leads to not dropping local variables
Raised by @CAD97:
I actually find the current behavior of #![feature(c_unwind)] unwinding in extern "C" somewhat strange. Specifically, any unwinding edges within the extern "C" fn get sent to core::panicking::panic_cannot_unwind, meaning that the unwind happens up to the extern "C" fn, but any locals in said function don't get dropped. I would personally not expect wrapping the body of an extern "C" function in an inner extern "Rust" function to change behavior, but it does.
Reproducing example:
#![feature(c_unwind)]

struct Noise;
impl Drop for Noise {
    fn drop(&mut self) {
        eprintln!("Noisy Drop");
    }
}

extern "C" fn test() {
    let _val = Noise;
    panic!("heyho");
}

fn main() {
    test();
}
I would expect "Noisy Drop" to be printed, but it is not.
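For comparison, here is a sketch of the wrapped variant described above, reusing the Noise type from the reproduction (the function names are illustrative). Under the current implementation the panic unwinds out of inner, so _val is dropped and "Noisy Drop" is printed, and only then does the unwind hit the nounwind extern "C" frame and abort:

extern "C" fn test_wrapped() {
    // Same body as `test`, but moved behind an inner Rust-ABI function.
    fn inner() {
        let _val = Noise;
        panic!("heyho");
    }
    inner();
}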
IMO it'd make most sense to guarantee that with panic=unwind, this destructor is still called. @nbdd0121 however said they don't want to guarantee this.
What is the motivation for leaving this unspecified? The current behavior is quite surprising. If I understand @CAD97 correctly, we currently could make "Noisy Drop" be executed by tweaking the MIR we generate.
Nominating for t-lang discussion to get their vibe on this question.
There's prior discussion here: https://rust-lang.zulipchat.com/#narrow/stream/210922-project-ffi-unwind/topic/Behaviour.20of.20destructors.20wrt.20terminate.2Fabort
Wow that was years ago. What's the take-away from that discussion?
As pointed out in the discussion, changing how we insert the abort into functions is enough to change the observed behaviour of the implementation, but the key question is what behaviour any implementation is allowed to have.
Options:
- To not call any destructors if panic happens in no-unwind context.
  - Is quite desirable and can be helpful for debugging
  - This is currently the behaviour for MSVC SEH in cleanup context.
  - We probably can't implement this behaviour for all platforms.
- To call all destructors before the unwind.
  - Additional code needs to be injected to guarantee this (especially for MSVC SEH cleanup case)
  - Questionable value about running all destructors if abort is to happen.
- Leave it unspecified whether destructors will be called at all, or how many call frames are to be unwound before aborting.
  - Maximum flexibility for implementation
  - Consistent with C++'s std::terminate specification.
  - Can avoid adding landing pads and destructor code if we know for certain abort is about to happen.
Some additional complexity involves a foreign exception. For example, if we have a mixture of C++ and Rust stack frames, then any specification may result in surprises. E.g. when a nounwind frame is introduced by a C++ noexcept function, it's up to the C++ personality function to decide whether a Rust panic triggers C++ std::terminate in phase 1 unwinding, before any Rust destructor is called, or triggers in phase 2, after Rust destructors are called, terminating when it reaches the C++ frame. With a C++ -> Rust -> C++ -> Rust call stack, this means that some Rust frames might have their destructors called while others do not.
cc @rust-lang/wg-ffi-unwind
To not call any destructors if panic happens in no-unwind context.
Judging from prior discussion, "no-unwind context" is a very specific technical term here and not something visible in Rust? Or what exactly do you mean?
Is quite desirable and can be helpful for debugging
Why? If by this you mean the current behavior, I find it undesirable and confusing and thus hindering debugging.
Or do you mean that the panic will somehow predict whether it will during its unwinding hit a no-unwind stack frame and then change behavior early on? That's spooky-action-at-a-distance, so I also don't think that's desirable.
Maximum flexibility for implementation
That's a pretty poor argument IMO, our job is to provide consistent and predictable semantics to our users whenever that is possible with reasonable performance.
Some additional complexity involves a foreign exception.
For this issue I only care about Rust panics.
Let me explain in greater detail, this'll be long..
For unwinding, there are a few types of cases for nounwind/noexcept/whatever:
1. A stack frame contains no unwind metadata / reaches end of frame
2. Personality function determines the callsite is nounwind and unwinding should terminate
3. There's a try/catch and the exception handler calls terminate.
Unwinders can either do single-phase unwinding or two-phase unwinding. For the latter, the unwinder first walks the stack without calling any destructors to find the frame that will catch the exception, and then starts a second walk that runs cleanups. If no catching frame can be found, the unwinding process fails in phase 1 (let's ignore forced unwind for now). The Itanium C++ ABI unwinder has two phases.
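To make that two-phase flow concrete, here is a rough, non-normative sketch in Rust-flavoured pseudocode. The types and names (Frame, personality, Phase, SearchResult, caller_of) are invented for illustration and do not correspond to the real _Unwind_* API:

// Illustrative sketch only: the general shape of two-phase unwinding.
enum Phase { Search, Cleanup }
enum SearchResult { ContinueUnwind, HandlerFound, Fatal }

struct Frame; // stand-in for one stack frame
fn caller_of(_f: &Frame) -> Option<Frame> { unimplemented!() }
fn personality(_f: &Frame, _phase: Phase) -> SearchResult { unimplemented!() }
fn run_cleanups(_f: &Frame) { /* this frame's landing pads / destructors */ }
fn terminate() -> ! { std::process::abort() }

fn raise_exception(throwing_frame: Frame) -> ! {
    // Phase 1 (search): walk up the stack WITHOUT running any destructors,
    // asking each frame's personality routine whether it will catch.
    let mut frame = throwing_frame;
    loop {
        match personality(&frame, Phase::Search) {
            SearchResult::HandlerFound => break,  // a catching frame exists
            SearchResult::Fatal => terminate(),   // personality forbids unwinding
            SearchResult::ContinueUnwind => match caller_of(&frame) {
                Some(parent) => frame = parent,
                None => terminate(),              // end of stack / no unwind metadata
            },
        }
    }
    // Phase 2 (cleanup): walk up again from the throwing frame, calling
    // run_cleanups on each frame, until the handler found in phase 1 takes
    // over. Destructors only ever run in this second phase.
    terminate() // elided: real unwinders transfer control to the handler
}

The point of the sketch: if phase 1 fails, phase 2 never starts, so no destructor anywhere on the stack runs.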
So if you have C code (compiled without unwind tables) that calls into a Rust C-unwind function which panics, then it's case (1), and phase 1 will fail. No destructor will be executed.
A C++ noexcept function is case (2) in GCC. With GCC's personality function implementation, phase 1 will consider a noexcept function a catching frame and complete (stopping at this frame) without failing. When the unwind reaches the noexcept frame, no cleanup is performed and termination happens immediately (similar to Rust's behaviour today).
A C++ noexcept function is case (3) in Clang, because Clang doesn't yet support encoding this information for the personality function. noexcept is codegened (roughly) as try {...} catch(...) { terminate() }. Since it's a catch, phase 1 will stop at that frame. When phase 2 reaches it, terminate() is executed without calling any destructors in the final frame (it's a try/catch, so terminate runs with the variables still in scope, and the destructors of that last frame are rightfully skipped). This is also similar to Rust's behaviour today.
There's a subtle difference between (2) and (3) w.r.t to optimisation. Say we have this code:
#include <cstdio>

struct D {
    ~D() {
        fprintf(stderr, "Drop!\n");
    }
};

static void foo() {
    D d;
    throw "";
}

static void bar() noexcept {
    D d;
    foo();
}

int main() {
    bar();
}
In both GCC and Clang, you will get only one Drop! print. If you turn on GCC's optimisations, though, you will get no Drop! prints. You still get one Drop! print with Clang + opt. In no case do you get two prints. These are all acceptable behaviours, because the C++ spec says it is unspecified whether the destructors are called in this case.
The Rust behaviour today is very similar to clang's behaviour in the example.
Now to answer your questions
Or do you mean that the panic will somehow predict whether it will during its unwinding hit a no-unwind stack frame and then change behavior early on? That's spooky-action-at-a-distance, so I also don't think that's desirable.
Yes, I mean this. Since unwinding has two phases, phase 1 can determine whether the unwind is possible and can skip phase 2 entirely. It's already the case that a Rust panic will cause no unwinding at all if it escapes into functions with no unwind tables, or, with MSVC SEH, into cleanup code.
It's helpful for debugging because all the stack frames are intact so you can inspect all frames upon abort. Currently we unwind, hit an extern "C" frame, and print a panic-cannot-unwind message. If RUST_BACKTRACE is not enabled, the first panic prints no stack trace and the information is already lost by the time the abort happens. What could be done is to use phase 1 to figure out that the unwind will cause termination, and then print a stack trace, along with an indication of which frame prevents the unwinding from happening.
That's a pretty poor argument IMO, our job is to provide consistent and predictable semantics to our users whenever that is possible with reasonable performance.
Given how important FFI is with regard to extern "C" and all unwindable ABIs, I think it's also important to consider consistency with other languages. As detailed above, the implementation today is very consistent with the behaviour of C++ implementations.
We certainly can define the behaviour to be that all destructors are executed. It would simply require adding a try/catch around all extern "C" functions. However, I am not sure that's desirable. When the GCC backend gets better, or if LLVM adds support for encoding a "terminate" action, this will prevent us from moving from case (3) to case (2), which can reduce code size quite significantly.
If leaving this unspecified allows better optimisation w.r.t. landing pad sizes, then IMO we should allow such optimisation, given that a panic escaping to an unwindable FFI interface is very rare and almost always a bug (since an abort is imminent). If one wants all destructors to be called, they can very easily implement that behaviour with catch_unwind.
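For illustration, a minimal sketch of that catch_unwind workaround (the function and type names are invented): the catch boundary sits inside the extern "C" function, so under the current implementation every destructor between the panic and that boundary runs before the explicit abort.

use std::panic;
use std::process;

struct Noisy;
impl Drop for Noisy {
    fn drop(&mut self) {
        eprintln!("Noisy Drop");
    }
}

extern "C" fn entry() {
    // Catch the panic inside the nounwind frame so cleanups run up to here,
    // then abort explicitly instead of letting the unwind hit the boundary.
    let result = panic::catch_unwind(|| {
        let _val = Noisy;
        panic!("heyho");
    });
    if result.is_err() {
        process::abort();
    }
}

fn main() {
    entry();
}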
Some additional complexity involves a foreign exception.
This will happen with a Rust panic in the presence of foreign frames, as well as with a foreign exception in the presence of Rust frames. I think it'll be less consistent if our specification of Rust panic behaviour depends on whether a foreign frame is present or not.
I'll reply in full later, for now I have a clarification question:
The Itanium C++ ABI unwinder has two phases.
Itanium is dead, why should we care?
2-phase unwinding sounds like a lot of unnecessary complexity to me.^^ But I guess they had their reasons.
The Itanium C++ ABI is the ABI used by GCC and Clang on most non-Windows targets. The ABI was standardized for IA-64, hence the name, but the spec is used on many platforms.
Rust already uses Itanium EH for panics on the same set of targets.
The Itanium C++ ABI unwinder has two phases.
Itanium is dead, why should we care?
Basically every UNIX uses the same unwinder ABI as a replacement for the old SjLj unwinder ABI, which had non-zero overhead even when not throwing any exceptions. arm32 iOS is the only SjLj target we support(ed).
Thanks for explaining the Itanium thing.
While exploring what C++ does is interesting, I don't think C++ is necessarily a good guiding star to follow. We tend to value cross-platform consistency and predictability much more than C++ does. Having the number of drops depend on the optimization level sounds completely unacceptable to me.
So, ignoring what C++ does -- what are the downsides to saying that consistently, everything must be dropped until the boundary, i.e. even in the last stackframe?
Yes, I mean this. Since unwinding has two phases, phase 1 can determine whether the unwind is possible and can skip phase 2 entirely. It's already the case that a Rust panic will cause no unwinding at all if it escapes into functions with no unwind tables, or, with MSVC SEH, into cleanup code.
What exactly does this mean? You are assuming that I know what all these words mean. :) Can you state this in terms of what the Rust programmer sees as end-to-end behavior?
It's helpful for debugging because all the stack frames are intact so you can inspect all frames upon abort.
That argument applies to all panics. You are suggesting to make debugging better for some small subclass of panics. I don't think it's worth doing this only for "panics that happen to lead to an abort later". In fact I think that makes debugging worse because for some panics you'll see the full stack and for some you won't.
Instead, just set a breakpoint on some symbol inside the panic machinery. (AFAIK we have a dedicated symbol for that?) Or set panic=abort. In both cases the debugger will reliably trap before unwinding begins.
Given how important FFI is with regard to extern "C" and all unwindable ABIs, I think it's also important to consider consistency with other languages. As detailed above, the implementation today is very consistent with the behaviour of C++ implementations.
I think consistency with C++ is just as often something we explicitly don't want as we disagree with the C++ design philosophy. I also doubt most C++ programmers will even know that this is how C++ behaves, so the consistency only helps those few people that know the ins and outs of how unwinding is implemented.
If leaving this unspecified allows better optimisation w.r.t. landing pad sizes, then IMO we should allow such optimisation, given that a panic escaping to an unwindable FFI interface is very rare and almost always a bug (since an abort is imminent). If one wants all destructors to be called, they can very easily implement that behaviour with catch_unwind.
All panics are always a bug.
The question is whether those landing pad size wins are worth it for the extra confusion that inconsistent behavior will cause. And as I said above, I think making this opt-level-dependent is completely unacceptable. That would mean if I see a panic in my release build and then try to debug it in a debug build, it will behave completely differently! Maybe it's okay to say that behavior can differ between targets and between Rust versions, but I don't think we want any more variability than that.
@CAD97 also makes a good point:
I've personally settled into accepting turning an unwind through drop glue into an abort (e.g. by adding an abort call into the unwinding pads) as a practical choice. But not if an abort occurs without ever saying why, or at least calling the unwind handler with can_unwind=false.
If we allow the "unwind phase 1" to determine that unwinding will be skipped entirely, we end up with a can_unwind=true panic leading to an immediate abort. It can be quite hard for users to figure out what is even going on there, and why their destructors are not being executed.
In contrast, today we get a pretty nice error, where first there's a regular panic message and then when it hits a nounwind function (or drop, with the flag enabled), it prints a secondary message explaining why the panic was turned into an abort. (At least we get that on Linux. No idea if we reliably get it on all targets.)
Instead, just set a breakpoint on some symbol inside the panic machinery. (AFAIK we have a dedicated symbol for that?) Or set panic=abort. In both cases the debugger will reliably trap before unwinding begins.
That won't work if you didn't have a debugger attached from the start, but are relying on a coredump produced at the point of the SIGILL.
..., our job is to provide consistent and predictable semantics to our users whenever that is possible with reasonable performance.
It's difficult for me to object to this, because I think the principle is good. But I'm not sure that I agree it applies to specifying the precise behavior of a program that is already in the process of early termination.
A primary goal expressed to the working group from the start was to preserve unwind implementation flexibility, at least in terms of what is formally guaranteed. In fact, RFC-2945 originally did not even guarantee that a foreign exception entering Rust via extern "C" would be caught and trigger an abort.
It was also expressed that one downside of proposed changes to extern "C" is that it makes the ABI string a sort of half-baked effects system for exceptions, which is not really part of the proper role of an effect system. (Similar feedback also came from outside the Rust project; a Clang developer said something like "the ABI is not a sandbox" to me.)
The person inside the Rust project who expressed these concerns is no longer active in the project. But I nevertheless think we should be very cautious about introducing strong guarantees around the "abort" behavior unless we are very confident that they can be upheld on every platform on which we might wish for Rust to run, without jeopardizing performance.
All panics are always a bug.
Unfortunately, I'm not sure this is actually true, especially in the context of cross-FFI unwinding. One simple counterexample is allocator exhaustion: yes, this can often indicate a memory leak, but it's also possible that someone is simply running too much on a particular device.
As I have cheerily mentioned several times: I work on a library that catches longjmps, translates them into panics, and then translates them back into longjmps, using a baroque mechanism for making this actually conform to Rust's expected control flow semantics. A panic does not necessarily indicate a bug in the code that anyone using my library can actually control, because they do not necessarily have that much input into when the C code decides it wants to throw its home-baked "exceptions", and the alternatives to panicking when we run into these tend to be... worse. So I would turn it around to a different angle:
Even assuming it is "always" a bug, what should anyone do about it?
All panics are always a bug.
I said this in reply to a claim that "panics that will abort are a bug, therefore we can do weird semantics that make little sense unless one has studied unwinding ABIs for years". (I may have rephrased the argument a bit. ;) I don't think that's a valid argument, because sane behavior is important even in the presence of bugs. That's why UB is so nasty, it's the kind of bug where we don't have sane behavior any more. But here we're talking about cases which are explicitly not UB, they abort in a safe way, and I really don't think we should have UB-level of "spooky action at a distance" here -- something like the GCC behavior described above where with more optimizations, fewer destructors run. In terms of being able to debug and make sense of the situation, that's almost as bad as the nastiness one can see with UB. We have to ensure that will never happen.
It's difficult for me to object to this, because I think the principle is good. But I'm not sure that I agree it applies to specifying the precise behavior of a program that is already in the process of early termination.
The problem is, your definition of "being in the process of early termination" requires predicting the future. That goes entirely against the basic principles of an operational semantics, where we define step by step what happens.
I would like for unwinding to be a step-by-step process that just proceeds stack frame by stack frame. That would be a sane semantics people can understand and Miri can implement. But having to predict whether we will abort due to a condition that only becomes apparent later is a complete mess.
Basically, I am objecting to including anything like 2-phase-unwind into the opsem of Rust. 2-phase-unwind is an implementation detail, I don't see good reason why it should be in the spec. And without 2-phase-unwind, it must be the case that a panic that will abort 10 stack frames down, and a panic that will not abort, behave the same, since we can't predict the future.
(And even worse than having 2-phase-unwind in the spec would be having it in the spec only sometimes. That's just a nightmare scenario. And it seems like only some targets do 2-phase-unwind, so we couldn't even make the spec say that we always do 2-phase unwind.)
It was also expressed that one downside of proposed changes to extern "C" is that it makes the ABI string a sort of half-baked effects system for exceptions, which is not really part of the proper role of an effect system. (Similar feedback also came from outside the Rust project; a Clang developer said something like "the ABI is not a sandbox" to me.)
(I think this is not really related to this discussion, but I can't help but reply.^^)
I agree the extern "C" behavior here is somewhat surprising; I didn't expect this outcome either when the C-unwind work started. However it is a direct consequence of having ABIs where unwinding is UB, having panics in safe code, and having memory safety. (So I am not surprised a clang developer felt this looked strange, since I would not expect them to consider the memory safety implications.) We only have two choices: make extern "C" unsafe to write and put the responsibility of catching all panics onto the user, or make it safe and put that responsibility onto the compiler. I think you made the right choice here.
Also this is not like an effect system. An effect system would track which functions may or may not unwind, and just reject the code when you call a may-unwind function in an extern "C" function. That would be a third alternative besides "make it unsafe" and "make the compiler catch unwinding".
Using an optimized ABI is totally a possible role of an effect system, just like it is a role of a type system to enable optimized data representations. (Or did you mean to say "not really part of the proper role of an ABI"?)
honestly I think, ironically, that C++ sometimes is a good model for unwinding...
...specifically, MSVC.
my understanding is Windows adopts a different approach than the Itanium ABI unwinding, from the ground level up: the by-default behavior is to unify all mechanisms of nonlocal control flow for all languages compiled on it. that means it doesn't matter if you are a Rust panic, C++ exception, C longjmp, or even bare assembly: you get to participate in structured exception handling. throwing an exception, longjmp, and so on are all the same unwinding mechanism, by default, so everything works the same and everyone can catch and rethrow and in general understand each other's errors, even if C only sees all exceptions as int. and as far as I can tell the semantics tend to be about as mercilessly straightforward as Ralf would like, instead of having odd 2-phase properties.
the implementation of Visual C++ does have a compile option that allows C++ code to choose whether the try-catch interacts with the same exceptions C does by default, for reasons that are not clear to me. I believe the C++ code can still use __try and __except like C can, however.
for platforms that have this quirk it is probably very useful to preserve it.
Basically, I am objecting to including anything like 2-phase-unwind into the opsem of Rust. 2-phase-unwind is an implementation detail, I don't see good reason why it should be in the spec. And without 2-phase-unwind, it must be the case that a panic that will abort 10 stack frames down, and a panic that will not abort, behave the same, since we can't predict the future.
This means that, for it to work with Itanium EH (again, used by almost every non-Windows platform) and exceptions thrown by C++, the abort edge in an extern "C" function must be a handler (which is costly, and might cause some additional issues). I had expected to be able to handle it the same way as a noexcept function in C++, thus using a smaller frame table (with fewer edge cases than effectively being a bloody catch handler).
It's hard to be consistent with semantics across all platforms when unwinding itself isn't.
I believe the C++ code can still use __try and __except like C can, however.
This is necessary for some OS APIs. In Rust we have to resort to C shims to use such APIs, which isn't great.
This means that, for it to work with Itanium EH (again, used by almost every non-Windows platform) and exceptions thrown by C++, the abort edge in an extern "C" function must be a handler (which is costly, and might cause some additional issues). I had expected to be able to handle it the same way as a noexcept function in C++, thus using a smaller frame table (with fewer edge cases than effectively being a bloody catch handler). It's hard to be consistent with semantics across all platforms when unwinding itself isn't.
I am primarily concerned with unwinding from Rust panics, not exceptions triggered by other languages. I don't have as strong opinions on how C++ exceptions should behave as they pass through Rust stack frames. AFAIK we don't currently say much about what the rules even are there?
SEH rules are indeed nice and consistent, but we aren't ever going to have a world where they're the only rules.
don't have as strong opinions on how C++ exceptions should behave as they pass through Rust stack frames
Frankly, I'd expect them to interact with destructors the same way for consistency, both as a user, and an implementor here.
From https://github.com/rust-lang/rust/issues/123231#issuecomment-2029684587:
must be a handler (which is costly, and might cause some additional issues)
Don't we expose handlers via std::panic::catch_unwind already? My expectation is that at least in some cases I will be applying that to the body of every extern "C" exposed function to propagate panics as error codes. If there are problems with that (UB in some cases? Slow code? etc.) that would be great to know + document somewhere.
My mental model here aligns pretty closely with @RalfJung's I think: if the "extra cost" to making Drop behave the same way (e.g., if I'm debugging and stick an eprintln! in a Drop, I want to see it, even if the program aborts some amount of frames later!) regardless of whether there's some extern "C" function somewhere is just a bit of extra code in the binary, then I'd happily pay that price. Presumably, that code can be removed if LLVM (or Rust, via an effect system eventually) is able to statically prove a lack of Drop-requiring objects, too, and is necessary in every other function defined in Rust, right?
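As a sketch of the catch_unwind-at-the-boundary pattern mentioned above (the function name and the 0/-1 return convention here are made up for illustration): each exported function catches the panic at its own boundary and reports it as an error code, so every Drop between the panic site and the boundary runs as usual.

use std::panic;

#[no_mangle]
pub extern "C" fn my_ffi_entry() -> i32 {
    let result = panic::catch_unwind(|| {
        // ... actual body, including locals with Drop impls ...
    });
    match result {
        Ok(()) => 0,
        Err(_) => -1, // panic caught; destructors up to this frame have run
    }
}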
Don't we expose handlers via std::panic::catch_unwind already? My expectation is that at least in some cases I will be applying that to the body of every extern "C" exposed function to propagate panics as error codes. If there are problems with that (UB in some cases? Slow code? etc.) that would be great to know + document somewhere.
It's mostly EH table size and function size, IIRC. catch_unwind inserts one handler somewhere (and doesn't even fully try to handle foreign exceptions, at least in the rustc impl); that's a lot nicer than adding a bunch of extra full handlers that have to interact properly with foreign exceptions, including C++. Specifically, we need to actually participate in the two-phase system, returning _URC_HANDLER_FOUND in the search phase and then actually jumping to the landing pad in the unwind phase.
The problem is, your definition of "being in the process of early termination" requires predicting the future. That goes entirely against the basic principles of an operational semantics, where we define step by step what happens.
It doesn't have to predict the future. You can define it as each frame containing metadata indicating whether unwinding through it is allowed; then, when unwinding happens, look at the stack frames to see if a nounwind one will be reached before a handler. In fact that's exactly how phase 1 works.
Basically, I am objecting to including anything like 2-phase-unwind into the opsem of Rust. 2-phase-unwind is an implementation detail, I don't see good reason why it should be in the spec.
The issue is that that's how most unwinders currently work; it's a de facto standard, and I don't think we should deny its existence. If you have a Rust panic escaping to the end of the stack, to code without unwind metadata, or to languages with a personality function deciding the unwind must not progress further, e.g.
struct D;
impl Drop for D {
    fn drop(&mut self) {
        println!("Foo");
    }
}

#[no_mangle]
extern "C-unwind" fn foo() {
    let _d = D;
    panic!();
}
void foo();

int main() {
    foo();
    return 0;
}
Then your destructor will not be called, because the phase 1 unwind will fail and no actual cleanups are performed at all. In a world where FFI exists, Rust does not have full control of the phase 1 unwind, and other languages can influence its outcome. Note this is a Rust panic, not a foreign exception.
One way to guarantee the behaviour that you want is to make every cleanup landing pad a (catching) handler instead. But the unwinder ABI mandates that if you report to the unwinder that you're catching an exception, you can no longer decide to resume unwinding. This means that we have to rethrow after running a destructor instead of just resume it. This is terribly inefficient.
It's not that I oppose a sane behaviour; I support it. But I don't like the argument that pretends FFI unwinding doesn't exist. We need to respect that a Rust panic can traverse FFI frames and it's not for Rust to dictate the behaviour of a Rust panic in their frames. I also disagree that a Rust panic and a foreign exception should have different behaviour in Rust frames. Most languages use the same unwind framework; I think we should play by its rules.
It's also important to note that having a more "sane" behavior for unwinding is the entire point of extern "C-unwind". It exists specifically because of the problems inherent in making extern "C" behave "sensibly". If users want predictable behavior in the presence of an unwind that might cross an FFI boundary, they need extern "C-unwind"; that's what it's for.
It's also important to note that having a more "sane" behavior for unwinding is the entire point of extern "C-unwind". It exists specifically because of the problems inherent in making extern "C" behave "sensibly". If users want predictable behavior in the presence of an unwind that might cross an FFI boundary, they need extern "C-unwind"; that's what it's for.
That's the first time I hear this. I thought the point is that C-unwind is for when you actually want to propagate unwinding? After all in many cases you have no choice, you must use extern "C" -- namely when the other side of this API cannot deal with unwinding. Why should I not be afforded the luxury of a sane semantics in that case?
One way to guarantee the behaviour that you want is to make every cleanup landing pad a (catching) handler instead. But the unwinder ABI mandates that if you report to the unwinder that you're catching an exception, you can no longer decide to resume unwinding. This means that we have to rethrow after running a destructor instead of just resume it. This is terribly inefficient.
Which, incidentally, means that a foreign exception entering non-trivial Rust frames would be UB, because the Itanium EH spec states it's undefined behaviour to rethrow a foreign exception.
That's the first time I hear this. I thought the point is that C-unwind is for when you actually want to propagate unwinding? After all in many cases you have no choice, you must use extern "C" -- namely when the other side of this API cannot deal with unwinding. Why should I not be afforded the luxury of a sane semantics in that case?
Your argument seems to imply that foreign frames can never deal with the unwinding, at least from a Rust panic, and that Rust frames can never deal with a foreign exception. In my mind, that invalidates the whole purpose of extern "C-unwind", which is to guarantee that any language (including C) that plays nice with the standard platform unwinder will be able to throw into Rust code, and handle panics that come from Rust code across the C-unwind boundary.
Why should I not be afforded the luxury of a sane semantics in that case?
Because, in that case, "abort" is the "sane semantics" you are afforded. Providing further guarantees is difficult in a cross-platform way and, in my mind, contrary to the point of distinguishing "C" from "C-unwind". If you want to ensure destructors are called, use "C-unwind"; if you then want an abort after all destructors have been called, either do so manually with catch_unwind or mildly abuse the ABI system by wrapping a "C-unwind" function in a "C" function.
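A sketch of that second option, with invented names (and assuming extern "C-unwind" is available, e.g. via the c_unwind feature discussed above): under the current implementation the panic unwinds out of the inner "C-unwind" function, so its locals are dropped, and only the outer "C" frame turns the escaping panic into an abort. Whether that ordering is guaranteed is, of course, exactly what this issue is about.

struct Guard;
impl Drop for Guard {
    fn drop(&mut self) {
        eprintln!("Guard dropped");
    }
}

// Panics may unwind out of this function, so its locals get cleaned up.
extern "C-unwind" fn inner() {
    let _g = Guard;
    panic!("boom");
}

// "C" ABI shell: an unwind reaching this frame aborts, but (under the current
// implementation) only after `inner`'s destructors have already run.
#[no_mangle]
pub extern "C" fn outer() {
    inner();
}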
The issue is that that's how most unwinders currently work; it's a de facto standard, and I don't think we should deny its existence. If you have a Rust panic escaping to the end of the stack, to code without unwind metadata, or to languages with a personality function deciding the unwind must not progress further, e.g.
[...]
Then your destructor will not be called, because the phase 1 unwind will fail and no actual cleanups are performed at all. In a world where FFI exists, Rust does not have full control of the phase 1 unwind, and other languages can influence its outcome. Note this is a Rust panic, not a foreign exception.
I'm okay with making concessions when foreign frames or foreign exceptions are involved. But if a panic in the "naive" semantics (or in phase 1 unwind) never "sees" a non-Rust frame, then what would it take to guarantee all destructors are run?
One way to guarantee the behaviour that you want is to make every cleanup landing pad a (catching) handler instead.
If I understand correctly this is to deal with non-Rust frames further up that turn unwinding into abort ("langauges with a personality function deciding unwind must not progress further")? Is it also needed if we exclude that case?
But your example also shows that C-unwind will have "not sane" semantics when other languages are involved? @BatmanAoD that contradicts what you said. You said
If users want predictable behavior in the presence of an unwind that might cross an FFI boundary, they need extern "C-unwind"; that's what it's for.
but I'm not actually getting predictable behavior, I'm getting "if there's some FFI frame further down the stack that doesn't like unwinding then maybe my destructors never run or maybe they will, depending on the target and optimization flags and whoknowswhatelse".
I'm okay with making concessions when foreign frames or foreign exceptions are involved. But if a panic in the "naive" semantics (or in phase 1 unwind) never "sees" a non-Rust frame, then what would it take to guarantee all destructors are run?
Making the abort landing pad into a catch handler. In every extern "C" definition.