In #1174 I indicated I thought there was a regression in 8.1.0. After further investigation, it looks like the same issue exists in 8.0.0.
On iOS, I can consistently cause my app to crash by setting processEveryNthFrame to a low value. The higher I set processEveryNthFrame, the longer it takes to crash (e.g. with processEveryNthFrame in the 20-30 range, it takes a couple of minutes on an iPhone 6 before the app crashes). Even on a relatively old/slow device, the Android build of the same app isn't crashing.
It feels as if there's a memory leak where old unprocessed frames are building up until the app crashes.
Ideally, I think processEveryNthFrame would represent the maximum processing rate, but if the scanning falls behind, the processing rate should be adjusted so that the CPU keeps up. A simple algorithm might keep a rolling average of how long the last N frames took and pace new frame submissions based on that average. Google's tips at https://firebase.google.com/docs/ml-kit/ios/read-barcodes#performance_tips seem to align with this: they recommend dropping frames while the detector is still running.
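A rough sketch of that rolling-average idea, in TypeScript for illustration (the `FramePacer` class and its method names are hypothetical, not part of the plugin):

```typescript
// Hypothetical sketch: pace frame submission using a rolling average of
// recent detector processing times, so frames are only submitted as fast
// as the detector has recently been able to handle them.
class FramePacer {
  private durations: number[] = [];

  constructor(private windowSize: number = 10) {}

  // Record how long the detector took to process the last frame (ms).
  recordDuration(ms: number): void {
    this.durations.push(ms);
    if (this.durations.length > this.windowSize) {
      this.durations.shift();
    }
  }

  // Average processing time over the rolling window (0 if no samples yet).
  averageDuration(): number {
    if (this.durations.length === 0) return 0;
    return this.durations.reduce((a, b) => a + b, 0) / this.durations.length;
  }

  // Only submit a new frame if at least the average processing time has
  // elapsed since the previous submission; otherwise drop it.
  shouldProcess(nowMs: number, lastSubmittedMs: number): boolean {
    return nowMs - lastSubmittedMs >= this.averageDuration();
  }
}
```

With this, a detector averaging 200 ms per frame would effectively be throttled to about 5 fps regardless of the camera's frame rate.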
A semaphore might also be used to put a hard cap on the number of frames being processed if concurrency/multicore is considered.
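A counting semaphore along those lines might look like this (again a hypothetical sketch in TypeScript; `FrameSemaphore`, `tryAcquire`, and `release` are illustrative names):

```typescript
// Hypothetical sketch: a hard cap on the number of frames being processed
// concurrently. When the cap is reached, new frames are dropped instead of
// queued, so memory use stays bounded.
class FrameSemaphore {
  private inFlight = 0;

  constructor(private maxInFlight: number) {}

  // Returns true if the frame may be processed; the caller must call
  // release() from the detector's completion handler.
  tryAcquire(): boolean {
    if (this.inFlight >= this.maxInFlight) {
      return false; // drop the frame rather than queue it
    }
    this.inFlight++;
    return true;
  }

  release(): void {
    this.inFlight = Math.max(0, this.inFlight - 1);
  }
}
```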
I poked around a bit in TNSMLKitCameraView and I do see a throttle counter, but I'm unclear on how it's being used; perhaps what I described above is already handled in some way.
I might be overcomplicating this - it could also just be a plain memory leak of some kind, and might have nothing to do with the detector keeping up/not keeping up.
Good observation, and good questions as well. It would be best to drop frames when the detector is busy, but at the moment it only drops frames / throttles based on the processEveryNthFrame property.
I can't guarantee there's no memory leak, but I will investigate it.
What value did you set processEveryNthFrame to initially (when it crashed rapidly)?
There's no method in the detector we can use to check if it's running, but there's a completion block we can leverage to flag the detector as complete, and not accept new frames until it is. Let me try a few things today.
Fun fact: the last addition to my MLKit implementation was custom model interpretation, and I found it had a hard time processing all the frames (much worse than, e.g., barcode scanning), so I actually added something like what I described above.
Earlier when I tested: with processEveryNthFrame set to 1 it crashed within a few seconds, with 5 within tens of seconds, with 10 in under a minute, and with 20 in around two minutes.
Now (perhaps because my phone is nice and cool after having been idle for a while?) with it set to 1 the app lasts about 15 seconds before crashing, which is longer than it was lasting before.
Thanks for those details.
I've added the mentioned check to 8.1.1 (now on npm). I don't think it will solve all memory issues, but I'm curious to learn about the difference in behavior between 8.1.0 and 8.1.1...
Thanks for taking care of this so quickly. This is a massive improvement.
With 8.1.1, the only crash I'm seeing is if I dial processEveryNthFrame all the way down to 1.
On a cold phone it takes about a minute, with a warm phone the crash happens within about 25-30 seconds.
I don't see anything that would put backpressure on queue insertion, nor anything that limits the depth of the queue.
I think right now we are just getting lucky that, when some frames are being dropped, the current behaviour of the scheduler gives enough time to the thread(s) consuming queue items.
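For reference, a bounded queue that applies backpressure by dropping incoming frames when full might look like this (a hypothetical TypeScript sketch; `BoundedFrameQueue` is an illustrative name, not something in the plugin):

```typescript
// Hypothetical sketch: a frame queue with a hard depth limit. When the
// queue is full, offer() drops the incoming frame and reports it, so
// unprocessed frames can never accumulate without bound.
class BoundedFrameQueue<T> {
  private items: T[] = [];

  constructor(private capacity: number) {}

  // Returns false (frame dropped) when the queue is at capacity.
  offer(item: T): boolean {
    if (this.items.length >= this.capacity) {
      return false;
    }
    this.items.push(item);
    return true;
  }

  // Consumers take the oldest frame, freeing a slot for new ones.
  poll(): T | undefined {
    return this.items.shift();
  }

  get depth(): number {
    return this.items.length;
  }
}
```

With such a limit in place, the crash would degrade into dropped frames instead of unbounded memory growth, regardless of how the scheduler divides time between producer and consumer threads.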