JabRef version 3.7 on Ubuntu 14.04 (openjdk version "1.8.0_111")
Using a Medline search that matches more than 50 references only imports the last 50 references. There is no warning that the search will only import 50 references.
Steps to reproduce:
@zellerdev @tobiasdiez Did this happen because of https://github.com/JabRef/jabref/pull/2066? Do you think it's possible to bring the old behavior back?
The limit also causes headaches for StevenM: http://discourse.jabref.org/t/jabref-3-7-web-fetching-limitations-and-added-steps-to-do-simple-things/338
I am so sorry for StevenM ;-)
Wasn't the limit introduced to avoid IP blocking by PubMed? If so, it will be hard to circumvent. The best solution that comes to my mind in this case would be to make the limit configurable, so that a user can increase it to an arbitrary number if he/she wants to.
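For illustration, a minimal sketch of such a preference using `java.util.prefs` and a hypothetical `maxFetchedEntries` key (JabRef's real preferences infrastructure differs; the names here are placeholders):

```java
import java.util.prefs.Preferences;

public class FetcherPreferences {
    // Hypothetical preference key; not part of JabRef's actual API.
    private static final String MAX_FETCHED_ENTRIES = "maxFetchedEntries";
    private static final int DEFAULT_LIMIT = 50;

    private final Preferences prefs = Preferences.userNodeForPackage(FetcherPreferences.class);

    /** Returns the user-configured fetch limit, falling back to the default of 50. */
    public int getMaxFetchedEntries() {
        return prefs.getInt(MAX_FETCHED_ENTRIES, DEFAULT_LIMIT);
    }

    /** Lets the user raise (or lower) the limit. */
    public void setMaxFetchedEntries(int limit) {
        prefs.putInt(MAX_FETCHED_ENTRIES, limit);
    }
}
```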
@koppor
In #2066 I rewrote the Medline fetcher to use the new infrastructure. I set the number of fetched entries to the 50 most relevant ones. By default, the PubMed site would return 20 entries sorted by a different criterion.
Before I rewrote it, a dialog would pop up and ask for the number of entries to fetch. But back then the GUI code and the logic code were in the same class.
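For context, the cap and the ordering map directly onto parameters of NCBI's E-utilities `esearch` endpoint: `retmax` limits the number of returned IDs and `sort=relevance` selects the relevance ordering. A minimal, self-contained sketch in plain Java (not JabRef's actual fetcher code):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class PubMedSearch {
    public static void main(String[] args) throws Exception {
        String query = URLEncoder.encode("breast cancer screening", StandardCharsets.UTF_8.name());
        // retmax caps the number of returned IDs (the "50" discussed here);
        // sort=relevance mirrors the fetcher's "most relevant" ordering.
        URL url = new URL("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"
                + "?db=pubmed&term=" + query + "&retmax=50&sort=relevance");
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(url.openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line); // XML list of matching PMIDs
            }
        }
    }
}
```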
Hi @vchouraki!
Let me clarify first that JabRef is not intended to be a tool for mass download of citations. The purpose of the WebFetchers (such as the Medline fetcher) is to simplify the download of single, or at least few, entries without using the browser. That is, you import the bibliographic information of already known publications in a simple way.
However, it is still possible to import hundreds or even thousands of entries from Medline using the export functionality of the database itself. Perform the search query you like, and then choose the "Send to" -> "File" export (choose Medline or XML as the format):

The downloaded file can then be imported using JabRef's "File" -> "Import into current/new database" feature. (Note: depending on the number of entries, the import might require some - or quite a lot of - time.) I just tried it with an exported XML file of 130 MB and over 11,000 found entries, which took more than 10 minutes to import on my machine while I was writing this reply :wink:
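If you would rather script that download than click through the web UI, the same XML can be fetched via the E-utilities `efetch` endpoint and then imported as described above. A rough sketch, assuming you already have a list of PMIDs (the IDs below are placeholders):

```java
import java.io.InputStream;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class PubMedBulkDownload {
    public static void main(String[] args) throws Exception {
        // Placeholder PMIDs - substitute the IDs from your own search.
        String pmids = "11111111,22222222";
        URL url = new URL("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
                + "?db=pubmed&id=" + pmids + "&retmode=xml");
        // Save the Medline XML to disk for a later JabRef import.
        try (InputStream in = url.openStream()) {
            Files.copy(in, Paths.get("pubmed-export.xml"), StandardCopyOption.REPLACE_EXISTING);
        }
    }
}
```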
I think the answer by @matthiasgeiger is a sufficiently good explanation of why it makes sense to mark this issue as won't fix and close.
However, there is one thing we (and by we I mean @matthiasgeiger ) can do: Since similar questions have popped up a number of times, maybe you could add your answer to the help pages for fetchers in general?
Pffffff.... :wink:
I copied (and slightly adapted) @matthiasgeiger's text at http://help.jabref.org/en/Medline#mass-downloading-of-articles
@vchouraki Would that work for you?
@koppor, @matthiasgeiger, thanks for pointing to #2066 and the discourse thread. Maybe I did not look carefully enough at the changelogs. But apart from #2066 and the discourse thread, this was not advertised anywhere else and came as a surprise when I first noticed it.
I used PMIDs only to illustrate the 50-entry limit. I used to fetch Medline entries directly from JabRef using natural-language queries, and I never considered JabRef a tool for mass download of citations. Nevertheless, I would not call a query that retrieves ~100 results a "mass download". The solution proposed by @matthiasgeiger is okay with me, but it seems to defeat the purpose of the fetcher, which is to "Search the Web", as advertised as the first feature of JabRef on jabref.org. Since I started this thread, I quickly found and installed the JabFox extension, which is okay, but it requires both Firefox and Zotero. I would rather have one tool to manage and extend my bibliography.
Wouldn't it be possible to reintroduce the old behaviour of a popup warning about the high number of entries, letting the user decide whether he/she wants to fetch everything?
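Something along these lines, as a plain-Swing sketch (JabRef's actual dialog infrastructure differs), would restore that behaviour:

```java
import javax.swing.JOptionPane;

public class FetchLimitDialog {
    /** Asks the user whether to fetch all hits when the result count exceeds the limit. */
    public static boolean confirmLargeFetch(int totalHits, int limit) {
        int choice = JOptionPane.showConfirmDialog(
                null,
                "The search matched " + totalHits + " entries, more than the limit of "
                        + limit + ".\nFetch all of them anyway?",
                "Large result set",
                JOptionPane.YES_NO_OPTION,
                JOptionPane.WARNING_MESSAGE);
        return choice == JOptionPane.YES_OPTION;
    }
}
```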
I second @vchouraki; I was also surprised by the new behaviour. I use JabRef to build my own database of my research field, and with this new behaviour that has become mostly impossible with the fetcher: first because of the 50-entry limit, but also because the sorting is set to "relevance" instead of date.
I tried the XML import as mentioned, but I found it rather slow for 400 items compared to the fetcher (my feeling). But I like the fact that it imported the "journal-abbreviation" ;-)
So I guess I could change my workflow, but I agree it should be advertised somehow.
We currently have no money to pay someone for that, so it depends on the availability of our free time. Unfortunately, we have plenty of other open issues there: https://github.com/JabRef/help.jabref.org/issues/
To follow up on @koppor: Although the feature can be implemented, there is currently no one in the development team volunteering to do so.
We would be very happy if someone who wants the feature would be willing to implement it!
We are currently trying to focus on other things. :fire:
We are thinking about supporting that feature for all fetchers in general, but we currently have no time. Feel free to reopen in case you know someone who has the time to implement it.