Any code that calls at_exit inside of an example raises an exception:

```
Cannot use at_exit from an at_exit handler (Exception)
  from /usr/lib/crystal/crystal/at_exit_handlers.cr:10:5 in 'add'
  from /usr/lib/crystal/kernel.cr:471:3 in 'at_exit'
  from test_at_exit.cr:7:5 in 'initialize'
  from test_at_exit.cr:4:3 in 'new'
  from test_at_exit.cr:16:3 in '->'
  from /usr/lib/crystal/spec/example.cr:255:3 in 'internal_run'
  from /usr/lib/crystal/spec/example.cr:35:16 in 'run'
  from /usr/lib/crystal/spec/context.cr:48:23 in 'run'
  from /usr/lib/crystal/spec/dsl.cr:270:7 in '->'
  from /usr/lib/crystal/crystal/at_exit_handlers.cr:255:3 in 'run'
  from /usr/lib/crystal/crystal/main.cr:45:14 in 'main'
  from /usr/lib/crystal/crystal/main.cr:114:3 in 'main'
  from __libc_start_main
  from _start
  from ???
```
I reproduced this with:

```crystal
require "spec"

class Foo
  def initialize
    @child = Process.new("cat") # Some child or coprocess

    at_exit do
      unless @child.terminated?
        @child.kill(Signal::KILL)
      end
    end
  end
end

it "creates an instance of Foo" do
  Foo.new.should be_a(Foo)
end
```
On Arch Linux, Crystal version:

```
Crystal 0.34.0 (2020-04-12)

LLVM: 10.0.0
Default target: x86_64-pc-linux-gnu
```
I wonder why we disallow adding at_exit handlers while running the at_exit handlers. I think there's nothing wrong if we allow that.
Ruby behaves exactly like that:
```ruby
at_exit do
  puts "Outer 1"
  at_exit do
    puts "Inner 1"
  end
end

at_exit do
  puts "Outer 2"
  at_exit do
    puts "Inner 2"
  end
end
```

Output:

```
Outer 2
Inner 2
Outer 1
Inner 1
```
Despite being able to do that, it seems like bad design to call at_exit from code that can be reached by a spec. This hook is mostly useful for the outer application layer. Calling it anywhere else, such as in an initializer, seems like a code smell to me.
Even though I agree that it's a bad practice to call at_exit inside a constructor, I think calling at_exit inside at_exit should work.
I also just checked what Python does: it works exactly like in Ruby.
I'm trying to encapsulate resource cleanup inside the class. It doesn't seem like finalize gets called when exiting, is there another pattern that would be cleaner?
What does the program do in general, not just this class?
My use case is something like a terminal emulator, which spawns a child process that should not outlive the parent. I _could_ expose a do_cleanup method to be called by the main application, but I think that's an implementation choice Crystal shouldn't force me to make.
If the child process is wrapped by a class, that class should have a cleanup method to kill the child process. That's way more flexible than directly binding that to a global at_exit handler.
For example, you might at some point want to respawn the child process in a different instance of the class. Not sure if that's feasible with your specific use case, but it's an illustration of a possible extension.
If you have such a cleanup method, the application can easily make sure to call that method in an at_exit hook on the class instance.
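To illustrate the suggestion above, here is a minimal sketch in Ruby (the thread is about Crystal, but the pattern translates directly). The class and method names (`ChildWrapper`, `cleanup`) are hypothetical, and a boolean flag stands in for an actual child process:

```ruby
# Hypothetical wrapper: the library class exposes an explicit cleanup
# method instead of registering a global at_exit hook itself.
class ChildWrapper
  attr_reader :cleaned_up

  def initialize
    @cleaned_up = false # stands in for owning a real child process
  end

  # Explicit cleanup; in real code this would kill the child process.
  def cleanup
    @cleaned_up = true
  end
end

# The application layer, not the library, decides when cleanup runs:
child = ChildWrapper.new
at_exit { child.cleanup unless child.cleaned_up }
```

This keeps the exit-time policy in the application while the class still encapsulates *how* to clean up.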
Why force all users of a library to call at_exit when the library can do it?
@keidax I would suggest implementing a Disposable module if you need to take care of running some operation when the object is no longer used. AFAIK finalizers are not run by the gc when the process finishes. You can keep track of all disposable objects and cleanly terminate them upon one single at_exit.
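One way to sketch that Disposable idea, again in Ruby for illustration (all names here are hypothetical, not an existing API): disposable objects register themselves in a module-level list, and a single at_exit hook disposes whatever is still tracked.

```ruby
# Hypothetical Disposable registry: one global at_exit hook instead of
# one per object.
module Disposable
  @registry = []

  def self.track(obj)
    @registry << obj
  end

  def self.dispose_all
    @registry.reverse_each(&:dispose) # LIFO, like at_exit handlers
    @registry.clear
  end
end

# Registered exactly once, at load time:
at_exit { Disposable.dispose_all }

# Example resource that opts in to tracking:
class Resource
  attr_reader :disposed

  def initialize
    @disposed = false
    Disposable.track(self)
  end

  def dispose
    @disposed = true # real code would kill a child process, close a file, etc.
  end
end
```

Note that a real implementation would likely hold weak references so tracking doesn't keep otherwise-dead objects alive.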
If Ruby and Python allow exit handlers to be added on exit handlers, why don't we? I'm trying to find an answer to what's dangerous or bad about that. If there's really no problem, I think we should merge my PR.
@asterite My argument was directed at the specific example. It's not directly about calling at_exit from at_exit. By extension however, I don't think this should be necessary. I don't mind merging your PR though.
Personally I'd prefer to make it an error, and it would be great to error at compile time (that's probably not feasible, though). I don't think there's any relevant use case for defining an at_exit handler inside an at_exit handler. Such handlers should typically run small cleanup actions, not do a bunch of other stuff (like adding more handlers). It disrupts control flow in the application, and it's super weird when exit doesn't mean exit but "do a lot more stuff now that we're exiting". Allowing them to nest just invites doing more with them, and that leads to bad software design.
The way spec uses it to start the spec runner is really just a hack for a very atypical use case.
Do you have any ideas about how to run specs without using an at_exit handler?
No, that's why I think it's okay to do it like that. But it's an extraordinary case.