Describe the problem
I have a cluster with several failed and canceled import jobs. The data still appears to be present even days after the failure, and there does not seem to be any GC job associated with these failures. I suspect this is fallout from the migration to use a job to clean up the data, rather than having the schema changer pick it up based on the dropped state of the table.
To Reproduce
Start a large import, cancel it or somehow force it to fail (kill some nodes?), observe that there's no job to clean up the data.
As discussed, this is also the case for RESTORE.
This also seems to imply that we need to create a job for any dropped table that doesn't have a running job associated with it, as part of #46504.
One difference with these is that they don't need to wait for the GC TTL. They should probably just go straight to the status where the table is dropping.
The way we handled this in IMPORT is by setting the DropTime of the table to 1, so that it is GC'd immediately.
Fixed by #46727 and #46766.