Per the following thread: https://forum.nim-lang.org/t/2596
I have tested Jester and created a route serving a 9 MB tar file. With Firefox continuously reopening the link, RAM rises to around 318 MB with `--gc:markAndSweep`, and I got it up to about 562 MB without specifying any GC arguments.
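Roughly, the route looks like this (a minimal sketch, not the exact code; `big.tar` is a placeholder filename):

```nim
# reproduce.nim -- hypothetical minimal reproduction
import jester, asyncdispatch

routes:
  get "/tar":
    # readFile slurps the whole file into a GC-managed string on
    # every request; this is the memory the GC should reclaim.
    resp readFile("big.tar")

runForever()
```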
I get the same `lib\pure\asyncdispatch.nim(198, 14) Error: undeclared identifier: 'ForeignCell'` error as shawnye when compiling with `--gc:boehm`.
@jivank
`nim --version` and `uname -a`
```
nim --version
Nim Compiler Version 0.16.0 (2017-01-08) [Windows: i386]
Copyright (c) 2006-2017 by Andreas Rumpf
git hash: b040f74356748653dab491e0c2796549c1db4ac3
active boot switches: -d:release
```
I am using Windows 10 Pro 64-bit. I can also check whether it's reproducible on OSX or Linux if needed.
The `nim --version` information is incomplete; I don't see the Nim version...
@cheatfate
Sorry, I incorrectly used markdown. I fixed it now.
You said you are using Windows 10 Pro 64-bit, but for some reason you have the 32-bit Nim version. Could you tell me whether the executable you are running is 32-bit too?
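A quick way to check this from the program itself (a suggested sketch, not something anyone in the thread ran):

```nim
# Prints 32 or 64 depending on how the executable was compiled.
echo sizeof(pointer) * 8, "-bit executable"
```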
After reviewing the code, Jester doesn't even use asyncfile, but rosencrantz does.
Also, I have found leaks in asyncfile, but those parts of the code were never executed in my tests (it can leak on immediate completion of a readFile operation on Windows), so I really need source code to reproduce the problem.
The linked forum thread has some sample code: https://forum.nim-lang.org/t/2596.
Judging by the posts I don't think this is asyncfile related.
This forum thread is 1 year old, and talks about Nim 0.15.2.
I verified this on a Linux machine.
```
uname -a
Linux jivan-ws 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

nim --version
Nim Compiler Version 0.16.0 (2017-01-08) [Linux: amd64]
Copyright (c) 2006-2017 by Andreas Rumpf
git hash: 5947403e84ba44397b35f93f9d327c76e794210f
active boot switches: -d:release
```
Hello world example (Jester serves files from the `public` folder, which appears to use `readFile`):
```nim
# example.nim
import jester, asyncdispatch, htmlgen

routes:
  get "/":
    resp h1("Hello world")

runForever()
```
Created the `public` directory:

```
mkdir public
```
Placed files in it:

```
ls -lh
total 59M
-rw-rw-r-- 1 jivan jivan 2.8M Mar 22 13:49 nim.tar.xz
-rw-rw-r-- 1 jivan jivan  12M Mar 22 14:51 test12M
-rw-rw-r-- 1 jivan jivan  45M Mar 22 14:52 test45M
```

(test12M and test45M are just nim.tar.xz cat'd together multiple times)
I will HTTP GET the file 6 times with wget and measure the memory used, then kill the process:

```
for i in {0..5}; do wget -O- localhost:5000/filename > /dev/null; done
```
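(The exact way memory was measured isn't shown; one assumed way to sample the server's resident set size, if the binary is named `example`:)

```
ps -o rss= -p "$(pgrep -f example)" | awk '{printf "%.1fM\n", $1/1024}'
```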
| File | Final Memory Used |
|------|-------------------|
| nim.tar.xz (2.8M) | 22.3M |
| test12M (12M) | 114M |
| test45M (45M) | 358M |
Try Nim devel, please. 0.16 has a known memory-manager overallocation bug.
```
nim --version
Nim Compiler Version 0.16.1 (2017-03-24) [Linux: amd64]
Copyright (c) 2006-2017 by Andreas Rumpf
git hash: 0d8a503e450b1d240fb5ab914f40aa30d7b6cd8b
active boot switches: -d:release
```
Default GC:

| File | Final Memory Used |
|------|-------------------|
| nim.tar.xz (2.8M) | 39.9M |
| test12M (12M) | 189.9M |
| test45M (45M) | 682M |
`--gc:markAndSweep`:

| File | Final Memory Used |
|------|-------------------|
| nim.tar.xz (2.8M) | 39.6M |
| test12M (12M) | 175.9M |
| test45M (45M) | 536M |
`--gc:boehm`:

| File | Final Memory Used |
|------|-------------------|
| nim.tar.xz (2.8M) | 29.1M |
| test12M (12M) | 90.8M |
| test45M (45M) | 438M |
If all GCs "leak", it's a logical leak, aka a stdlib bug, not a GC issue.
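One way to tell the two cases apart (a suggested diagnostic sketch, not something from the thread): force a full collection and compare the GC's own counters with the process RSS.

```nim
# If `occupied` stays high after GC_fullCollect, references are still
# live somewhere (a logical leak). If `occupied` drops while the
# process RSS stays high, the allocator is retaining freed chunks
# instead of returning them to the OS.
import strutils

proc reportMem(label: string) =
  GC_fullCollect()
  echo label, ": occupied=", formatSize(getOccupiedMem()),
       " total=", formatSize(getTotalMem())
```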
I've made a sample that likely demonstrates this problem:
```
git clone https://github.com/yglukhov/jesterleak
cd jesterleak
nim c -r test.nim
```
Open https://localhost:5000 in a browser; the script will make the requests automatically. On macOS I observe memory consumption grow rapidly to around 1.6 GB and then stay there.
After discussion with @Araq, the allocator algorithm (`requestOsChunks`) is under suspicion.
> If all GCs "leak", it's a logical leak, aka a stdlib bug, not a GC issue.

So it's not a stdlib bug :)
It seems the problem still has not been solved.
I tried downloading a 63 MB file, which used 36 MB of RAM; then I downloaded an 8 MB file and memory went up to 80 MB. I found this line: `if fileSize < 10_000_000: # 10 mb`, which may be responsible for that result. Maybe changing it to 100 KB would help.
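For context, the pattern around that line looks roughly like this (an illustrative sketch, not Jester's actual code): files under the threshold are read whole into a GC-managed string per request, while larger files can be streamed through a small reusable buffer.

```nim
import os, streams

const threshold = 10_000_000  # the quoted 10 MB cutoff

proc serveBody(path: string, output: Stream) =
  if getFileSize(path) < threshold:
    output.write(readFile(path))        # whole file on the GC heap
  else:
    let f = newFileStream(path, fmRead)
    defer: f.close()
    var buf = newString(64 * 1024)
    while not f.atEnd:
      let n = f.readData(addr buf[0], buf.len)
      if n <= 0: break
      output.writeData(addr buf[0], n)  # bounded memory per request
```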
This problem is now Windows-specific. On the other OSes it shows stable (yet not optimal) memory behaviour.