This code shows a much bigger getOccupiedMem() value with arc/orc than with refc. Very likely to be related to the memory leak by async with ORC (e.g. see https://forum.nim-lang.org/t/6584)
import asyncdispatch
const
# Just to occupy some RAM
BigData = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
proc doNothing(): Future[void] {.async.} =
discard
proc main(): Future[void] {.async.} =
for x in 0 .. 100_000:
await doNothing()
waitFor main()
GC_fullCollect()
echo getOccupiedMem()
type
FutureBase = ref object
finished: bool
someData: string
const
# Just to occupy some RAM
BigData = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
proc main(): FutureBase =
var retFuture = FutureBase()
iterator mainIter(): FutureBase {.closure.} =
for x in 0 .. 100_000:
var internalTmpFuture = FutureBase(finished: true, someData: BigData)
yield internalTmpFuture
retFuture.finished = true
var nameIterVar = mainIter
proc mainNimAsyncContinue() {.closure.} =
var next = nameIterVar()
while not isNil(next) and next.finished:
next = nameIterVar()
if not isNil(next):
doAssert next.someData.len == 97
# GC_unref(next)
# If you uncomment the GC_ref above,
# the program basically uses no memory after the run.
# but crashes with refc, which might indicate
# that arc/orc simply never frees the result of "next"?
if finished(nameIterVar):
break
mainNimAsyncContinue()
return retFuture
discard main()
GC_fullCollect()
echo getOccupiedMem()
With refc: 62048 (varies a bit)
With orc: 4800048
With orc + GC_unref: 96
Roughly the same occupied mem as with refc after GC_fullCollect()
$ nim -v
Nim Compiler Version 1.3.5 [Linux: amd64]
Compiled at 2020-07-25
Copyright (c) 2006-2020 by Andreas Rumpf
git hash: d5627b3e9c11d9e4f0eb6406dbd1c6c38a1bff9b
active boot switches: -d:release -d:danger
Checking how much times the destroy of FutureBase is called with a custom destructor apparently gives us the fact that it's only called _once_ (and we have 100 thousand iterations where we create it!). When you uncomment GC_unref it actually starts being called for each iteration like it should.
Nim code
type
FutureBaseObj = object
finished: bool
someData: string
FutureBase = ref FutureBaseObj
proc `=destroy`(x: var FutureBaseObj) =
echo "destroy called once?"
`=destroy`(x.finished)
`=destroy`(x.someData)
const
# Just to occupy some RAM
BigData = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
proc main(): FutureBase =
var retFuture = FutureBase()
iterator mainIter(): FutureBase {.closure.} =
for x in 0 .. 100_000:
var internalTmpFuture = FutureBase(finished: true, someData: BigData)
yield internalTmpFuture
retFuture.finished = true
var nameIterVar = mainIter
proc mainNimAsyncContinue() {.closure.} =
var next = nameIterVar()
while not isNil(next) and next.finished:
next = nameIterVar()
if not isNil(next):
doAssert next.someData.len == 97
# GC_unref(next)
# If you uncomment the GC_ref above,
# the program basically uses no memory after the run.
# but crashes with refc, which might indicate
# that arc/orc simply never frees the result of "next"?
if finished(nameIterVar):
break
mainNimAsyncContinue()
return retFuture
discard main()
GC_fullCollect()
echo getOccupiedMem()
expandArc of mainNimAsyncContinue
var next
try:
next = :envP.nameIterVar2()
block :tmp:
while not isNil(next) and next.finished:
`=sink`(next, :envP.nameIterVar2())
if not isNil(next):
const
expr`gensym4807169 = "next.someData.len == 97"
const
loc`gensym4810005 = (filename: "b.nim", line: 31, column: 17)
ploc`gensym4810006 = "b.nim(31, 18)"
{.line: (filename: "b.nim", line: 31, column: 17).}:
if not (len(next.someData) == 97):
failedAssertImpl("b.nim(31, 18) `next.someData.len == 97` ")
if finished_1(:envP.nameIterVar2):
break :tmp
finally:
`=destroy`(next)
Further simplification, only one closure iterator left:
type
FutureBase = ref object
someData: string
const
# Just to occupy some RAM
BigData = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"
iterator mainIter(): FutureBase {.closure.} =
for x in 0 .. 100_000:
var internalTmpFuture = FutureBase(someData: BigData)
yield internalTmpFuture
proc main() =
var nameIterVar = mainIter
var next = nameIterVar()
while not isNil(next):
next = nameIterVar()
if not isNil(next):
doAssert next.someData.len == 97
# GC_unref(next)
# If you uncomment the GC_ref above,
# the program basically uses no memory after the run.
# but crashes with refc, which might indicate
# that arc/orc simply never frees the result of "next"?
if finished(nameIterVar):
break
main()
GC_fullCollect()
echo getOccupiedMem()
Another example (for when using async without -d:release)
type
FutureBase = ref object of RootObj ## Untyped future
finished: bool
stackTrace: seq[StackTraceEntry]
Future[T] = ref object of FutureBase
proc newFuture(): owned(Future[void]) =
new(result)
result.finished = false
result.stackTrace = getStackTraceEntries()
proc complete(future: Future[void]) =
future.finished = true
proc doNothing(): Future[void] =
var retFuture = newFuture()
iterator doNothingIter(): FutureBase {.closure.} =
complete(retFuture)
var nameIterVar = doNothingIter
proc doNothingNimAsyncContinue() {.closure.} =
complete(retFuture)
var next = nameIterVar() # need that line
doNothingNimAsyncContinue()
return retFuture
proc main(): Future[void] =
var retFuture = newFuture()
iterator mainIter(): FutureBase {.closure.} =
for x in 0 .. 5_000:
var internalTmpFuture: FutureBase = doNothing()
yield internalTmpFuture
complete(retFuture)
var nameIterVar = mainIter
proc mainNimAsyncContinue() {.closure.} =
if not finished(nameIterVar):
var next = nameIterVar()
while not isNil(next) and next.finished:
next = nameIterVar()
if finished(nameIterVar):
break
mainNimAsyncContinue()
return retFuture
discard main()
GC_fullCollect()
echo getAllocStats()
With nim -v 2a8bf470af995f4e7d5bb8a0f49bb2226f95c311 I still get what appear to be memory leaks with ORC and async by monitoring two processes, one built with REFC the other with ORC. After 12 hours of running, the REFC proc's RES tops at 8720 while ORC keeps growing, and its latest number is 163892.
It's a simple continuous GET request, with an intermittent pinging healthcheck optional, which I use to get a report if the proc crashes.
import
httpClient,
asyncDispatch,
asyncFutures,
osproc,
times
template asyncHttp(body: untyped): untyped =
let client {.inject.} = newAsyncHttpClient()
defer: client.close
body
proc reqGet(
url: string
) {.async.} =
asyncHttp:
discard await client.get url
proc main(
cached = false,
drainDuration: int = 2,
healthCheckUrl = "",
healthCheckInterval = 30,
url: string = "https://reqres.in/api/users"
): int =
result = 1
var hcTimer = now() - healthCheckInterval.seconds
while true:
try:
asyncCheck reqGet url
if (now() - hcTimer).inSeconds.int > healthCheckInterval:
if healthCheckUrl.len > 0:
asyncCheck reqGet healthCheckUrl
hcTimer = now()
GC_fullCollect()
echo getOccupiedMem()
drain(1000 * drainDuration)
except:
echo "Main err: "
echo getCurrentExceptionMsg()
echo getStackTrace()
when isMainModule:
quit main()
OccupiedMem for REFC reports values:
608816
392288
398768
392608
608688
392288
608816
While ORC is just zeroes now, with previous builds it wasn't. Maybe getAllocStats is what should be used in the case of ORC I presume?
@D-Nice I know there are still some kind of leaks (I tried with asynchttpserver for example), but they require more investigation to try to minimize like I did in this issue. async libs count in a lot of lines of code, so if you can, please try to minimize the code to trigger the leak :)
New test program that nicely shows the problem:
import deques, strutils, asyncdispatch
proc doNothing(): Future[void] {.async.} =
var q = initDeque[string]()
for i in 0..1000:
q.addLast($i)
proc main(): Future[void] {.async.} =
for x in 0 .. 1_000:
await doNothing()
for i in 0..20:
waitFor main()
GC_fullCollect()
echo formatSize getOccupiedMem()
Still getting ORC isolated leaks with devel 1.5.1 833035bacd91c2cf16ea006b64bf996e87eac75d
Simplest example I could make is with sleepAsync, I am getting similar (worse) leaks when using just asyncHttpClient as well.
leaky
import asyncdispatch
var futures: seq[Future[void]]
for i in 0 .. 20:
defer:
futures.setLen 0
GC_fullCollect()
echo getOccupiedMem()
futures.add sleepAsync 5
futures.add sleepAsync 2
futures.all.waitFor
No leaks with just one instantiation
no leak
import asyncdispatch
var futures: seq[Future[void]]
for i in 0 .. 20:
defer:
futures.setLen 0
GC_fullCollect()
echo getOccupiedMem()
futures.add sleepAsync 5
futures.all.waitFor
asyncHttpClient leaks with ORC for me regardless of number of clients, however, I'd prefer to see this fixed before investigating that further, as maybe this fixes that issue too.
@D-Nice yeah, found the same with -d:useMalloc -

the one in the middle is the actual leak it seems
The code to repro is (taken from the top comment and changed a bit):
# -d:danger --gc:orc -d:useMalloc
import asyncdispatch
var futures: seq[Future[void]]
for i in 0 .. 20:
futures.add sleepAsync 5
futures.add sleepAsync 2
futures.all.waitFor()
futures.setLen 0
GC_fullCollect()
Maybe we should add an "ORC" label to the issue tracker?
The ARC label is fine.
Most helpful comment
New test program that nicely shows the problem: