JabRef version 3.6 consumes a lot of memory under Windows 7 (more than 500 MB, see the attached screenshot) and sometimes it freezes (it then allocates more and more memory without bound; I killed the process after 1500 MB). On Fedora 24 it freezes much more often. The previous versions were completely stable on Windows 7.
Thank you for your report :+1:
This should be fixed in current master. Please try the latest build from http://builds.jabref.org/master.
After installing the latest build from your link, things did not get better. When I wanted to close it, it did not exit, and the allocated memory was not freed (see the screenshot). I had to kill the process.
Can you give us some steps to reproduce the memory leak?
I just opened it and added an article. Here is the .bib file.
SaddlePoint.zip
Ah thanks great! We will investigate this!
500 MB of memory is normal for JabRef. Your system seems to have only 2 GB of RAM.
No, my system has 16 GB of RAM.
Do you have the 64-bit Java Runtime Environment installed?
Refs #2166 #2175
I can confirm at least the large memory footprint. After opening a normal db (mine has 500 entries and a few groups) JabRef eats only 200 MB of RAM. Now open the entry editor and run through a few entries (using the arrow keys so that a new entry editor is generated for the entries). Result: over 1 GB of RAM usage. Btw: the same db with 3.6 only needed < 100 MB RAM.


As long as I keep the entry editor closed nothing happens to the RAM.
JabRef 3.7-dev--snapshot--2016-11-08--master--fffad83
windows 10 10.0 amd64
Java 1.8.0_111
I can't reproduce. For me this issue was fixed in #2175.
Is this issue valid for BibTeX or BibLaTeX mode?
Both.
Using JabRef 3.7-dev--snapshot--2016-11-08--master--fffad83 with OpenJDK 1.8.0_111 under Fedora 24 seems to have solved the problem. Doing the same procedure as @tobiasdiez, i.e.
> I can confirm at least the large memory footprint. After opening a normal db (mine has 500 entries and a few groups) JabRef eats only 200 MB of RAM. Now open the entry editor and run through a few entries (using the arrow keys so that a new entry editor is generated for the entries). Result: over 1 GB of RAM usage. Btw: the same db with 3.6 only needed < 100 MB RAM.
does not increase the memory usage on Fedora.
Okay, I investigated this a bit, but I'm not sure what conclusions to draw.
Have a look at the following heap diagram:

The first spikes (unmarked) are produced by constantly switching from entry to entry by holding down the cursor key while the entry editor is closed.
Opening the entry editor and slowly switching from one entry to the next produces the footprint marked in blue: more space is allocated, but it is quickly GCed.
Having the entry editor open and then holding down the cursor key produces the footprint marked in red: due to the high CPU load the newly created EntryEditors are not GCed, so more and more memory is allocated. _(Sidenote: This is massively improved if no DatePickers are created!)_
Performing a manual GC (green) frees most allocated memory.
Thus there is no "memory leak" in the strict sense as all resources are freed eventually.
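For anyone who wants to reproduce this distinction without a profiler, a rough sketch like the following (plain JDK, not JabRef code) is enough: sample the used heap, request a GC, and sample again. If the post-GC value stays roughly flat across entry-editor cycles, we are looking at uncollected garbage rather than a leak.

```java
// Rough heap probe: compares used heap before and after a requested GC.
// System.gc() is only a hint to the JVM, but it is good enough for this kind of check.
public class HeapProbe {

    private static long usedHeapMb() {
        Runtime rt = Runtime.getRuntime();
        return (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
    }

    public static void main(String[] args) {
        System.out.println("used before GC: " + usedHeapMb() + " MB");
        System.gc();
        System.out.println("used after GC:  " + usedHeapMb() + " MB");
    }
}
```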
However, there is another strange effect I was not able to pin down: Without any interaction more and more heap space is used:

This is also not a "real" memory leak, as this memory will likewise be freed upon garbage collection.
_Note: This behavior is not new; all JabRef versions since at least 2.10 do this. But I was not able to find the reason for this behavior..._
To conclude: The large memory consumption of JabRef is not really nice, but it should not be a big issue in itself. It would be cool if someone were able to track down this strange constant growth of the needed heap space. However, I assume something strange is happening in some AWT thread that we cannot easily fix...
Thus: This should not be a blocker for 3.7 - but we should consider removing those buggy DatePickers...
Ref #2176
Now that #2176 is fixed with #2340, we can close this issue as well, can't we?
After all, replacing the data picker was all we decided to do about this?
More or less, yes.
I'll check whether this is now changed with LGoodDatePicker.
No change with LGoodDatePicker: Opening an EntryEditor and then cycling through the main table by holding down the cursor key will create massive CPU load and memory usage, which will eventually be GCed.
However, this is not the real issue here. The constantly growing used heap space (even if JabRef is just idling) will potentially lead to more and more memory consumption, because the JVM increases (and decreases) the heap size after each automatic GC. And since the whole heap space is assigned to the JVM, the whole size will be reported as "used" by the OS.
As already written above: I could not find the reason for the constant changes in the used heap space. Perhaps someone else is able to find the reason for this...
Ok, what a pity. To proceed, two things:
1. If we actually want to improve memory usage, we need a reproducible and easily executable benchmark that we can optimize for. Otherwise it is hard to test a potential solution or to monitor JabRef's memory consumption over time. I guess you do not want to redo this analysis every time we try something out... @matthiasgeiger or @tobiasdiez: do you see potential for implementing a memory test with JMH that allows us to reproduce the problem here? (A sketch of what such a benchmark could look like follows below.)
2. Since this is about unused heap usage, we may play around with different command line args, as we already do for string deduplication. However, a reproducible benchmark we can use to assess progress (see point 1) would be beneficial first. Regarding command line args for reducing unused heap space, see http://stackoverflow.com/q/38295692/1127892 The suggestion there seems to be to use -XX:+UseG1GC.
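If we go the JMH route, here is a minimal sketch of what such a benchmark could look like. Everything JabRef-specific is only a placeholder (the package name and loadDatabaseAndCycleEntries() are purely hypothetical); the interesting part is JMH's GC profiler, which reports allocation rate and heap churn per invocation and would give us a reproducible number to optimize against.

```java
package org.jabref.benchmarks; // hypothetical package

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Warmup;
import org.openjdk.jmh.profile.GCProfiler;
import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

@State(Scope.Benchmark)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Warmup(iterations = 3)
@Measurement(iterations = 10)
@Fork(value = 1, jvmArgsAppend = {"-Xmx512m"}) // cap the heap so regressions fail loudly
public class MemoryBenchmark {

    @Benchmark
    public void cycleEntryEditor() {
        // Hypothetical workload: load a test .bib file and simulate switching
        // through all entries with an open entry editor.
        loadDatabaseAndCycleEntries();
    }

    private void loadDatabaseAndCycleEntries() {
        // Placeholder; a real benchmark would call into JabRef's entry editor code here.
    }

    public static void main(String[] args) throws RunnerException {
        Options options = new OptionsBuilder()
                .include(MemoryBenchmark.class.getSimpleName())
                .addProfiler(GCProfiler.class) // adds gc.alloc.rate and gc.churn metrics
                .build();
        new Runner(options).run();
    }
}
```

Running this (or the JMH command line with -prof gc) prints the allocation and churn figures next to the timing results, which could serve as the benchmark baseline mentioned in point 1.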
So, I did some preliminary testing with the -XX:+UseG1GC arg and a very small 40-entry database; here are the results when cycling through the main table:
Current master without additional args:

Current master with args:

Note that with the args the total heap is much smaller and stays constant. I say we test this some more with a gigantic database, and if it works there as well, go for it.
Documentation of the garbage collector I have set is here: http://www.oracle.com/technetwork/articles/java/g1gc-1984535.html We are currently using the default, which resolves to -XX:+UseParallelGC.
And another nice summary of the available garbage collectors: http://blog.takipi.com/garbage-collectors-serial-vs-parallel-vs-cms-vs-the-g1-and-whats-new-in-java-8/
Note that the G1GC comes with the string deduplication that we use anyway.
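As a side note, in case we want to verify which collector a given JabRef process actually ends up with, a tiny diagnostic like the following works (this is just an illustrative sketch, not JabRef code): the default parallel GC reports beans named "PS Scavenge" / "PS MarkSweep", while G1 reports "G1 Young Generation" / "G1 Old Generation".

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Prints the names of the active garbage collectors, e.g.
// "PS Scavenge" / "PS MarkSweep" with the default parallel GC,
// "G1 Young Generation" / "G1 Old Generation" with -XX:+UseG1GC.
public class GcCheck {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```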
And here is some data for cycling through a 6500-entry database:
With G1GC (initial growth when I opened the database):

With the default parallel GC:

Decision at the devcall:
Thank you for reporting this issue. We think that this is already fixed in our development version, and consequently the change will be included in the next release.
We would like to ask you to use a development build from https://builds.jabref.org/master and report back if it works for you.
Just to note: We switched to G1GC, which is integrated in master now.
This problem persists for me too. Windows 7 64-bit, JabRef 3.8.2 64-bit installed via Chocolatey, together with the required JRE 8. No matter how big the *.bib file is, upon opening any database file JabRef quickly starts to consume RAM (I have 8 GB) and becomes slow and unresponsive as it reaches around 1.5 GB of memory consumption (after a minute or so). Neither resetting preferences nor installing the 32- and 64-bit builds of the stable 3.8.2 release or the latest snapshot (4.0.0, 15-05-2017) from http://builds.jabref.org/master/ was of any help.
It's been mentioned above that 500 MB for an average database is normal for JabRef. Well, I strongly disagree. A reference manager should not consume that much memory for what it delivers. Zotero with about 5K entries, including attachments and a full-text index of all *.pdf files, notes, tags and all the bells and whistles, rarely consumes more than 200 MB of RAM on the same computer and practically never becomes slow or unresponsive.
P.S. I'm back to JabRef 2.10.0; this version doesn't have any of these issues with high RAM usage. I tried a test database with 1K entries and it stays steady at 160 MB of RAM with no freezes whatsoever.
Update: I thought it might have something to do with Windows 7, so I checked whether that was the case once I got a second laptop. On a freshly installed Windows 10 Pro 64-bit with the latest JRE 8.0.131 and JabRef 3.8.2 -- the only two programs installed -- JabRef somehow manages to take over 1.3 GB of RAM and makes the CPU throttle after adding a single entry from a DOI. I would conclude that the current stable JabRef version is severely broken, and the upcoming dev versions still do not fix this issue.