Submitting author: @lvermue (Laurent Vermue)
Repository: https://github.com/DTUComputeStatisticsAndDataAnalysis/MBPLS
Version: v1.0.0
Editor: @brainstorm
Reviewer: @arokem
Archive: 10.5281/zenodo.2560303
Status badge code:
HTML: <a href="http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c"><img src="http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c/status.svg)](http://joss.theoj.org/papers/864e8fb9bc214f14b878c6c559e3031c)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
@arokem, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @brainstorm know.
:sparkles: Please try and complete your review in the next two weeks :sparkles:
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @arokem it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews :crying_cat_face:
To fix this do the following two things:


For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
@lvermue: first of all, kudos on a nice and useful software package!
Could you please share the code that you used to generate the results in the "Benchmark" section of your paper? Thanks!
I have looked over all of the review criteria (also shown above) and posted issues to your repository marked with "[JOSS review]". I also added some other issues that are not required for the review of your paper, but that I still think would be good additions or changes.
@arokem Thank you for your comments and useful suggestions!
We have worked on all issues and feel confident that we have resolved them.
Regarding your previous comment:
Could you please share the code that you used to generate the results in the "Benchmark" section of your paper? Thanks!
The zip file attached to this comment contains three different Python scripts: one testing run-times for a fixed row size with increasing column size, one for a fixed column size with increasing row size, and one for a symmetric increase of both row and column sizes, as reported in the paper.
The scripts also contain the tests for the ade4 package, which was run through rpy2. We also performed various tests with pure R scripts, but could not detect any difference in performance or in the times recorded.
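To give an idea of what such a timing loop looks like, here is a minimal sketch (the MBPLS import and `fit` call follow the package's documented usage, but the sizes, data, and loop are simplified placeholders rather than the exact code in the attached archive):

```python
# Minimal run-time sketch: fixed number of rows, increasing number of columns.
# Assumes the mbpls package is installed (pip install mbpls); the sizes and
# single repetition per setting are illustrative only.
import time
import numpy as np
from mbpls.mbpls import MBPLS

n_rows = 500
timings = {}
for n_cols in [100, 500, 1000, 5000]:
    rng = np.random.RandomState(0)
    X1 = rng.rand(n_rows, n_cols)   # first data block
    X2 = rng.rand(n_rows, n_cols)   # second data block
    y = rng.rand(n_rows, 1)
    model = MBPLS(n_components=2)
    start = time.time()
    model.fit([X1, X2], y)          # fit the multiblock PLS model
    timings[n_cols] = time.time() - start
print(timings)
```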
The Redis Redlock module was used to let four identical servers coordinate and work on the test scripts simultaneously, i.e. each server knows what the others have done and are currently doing, so it can pick a remaining task not handled by the other servers.
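As a purely illustrative sketch of that coordination pattern (assuming the redlock-py client and a reachable Redis instance; the task names and benchmark stub are hypothetical placeholders, not the code from the attached archive):

```python
# Sketch: coordinate benchmark tasks across several servers via a distributed
# lock. Assumes redlock-py (pip install redlock-py) and a shared Redis host.
from redlock import Redlock

def run_benchmark(task_name):
    # placeholder for the actual run-time measurements
    print("running", task_name)

dlm = Redlock([{"host": "localhost", "port": 6379, "db": 0}])

for task in ["rows_fixed", "cols_fixed", "symmetric"]:  # hypothetical names
    lock = dlm.lock(task, 6 * 60 * 60 * 1000)  # claim the task for up to 6 h
    if not lock:
        continue  # another server already picked this task
    run_benchmark(task)
    dlm.unlock(lock)
```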
runtime_analysis.zip
Hi @lvermue : nice work on these revisions. Most of my comments are addressed.
Regarding the performance benchmark: any reason not to add these scripts to the repository of your software? Or, barring that, to put them in another publicly available location that you could refer to in your manuscript? That would improve the reproducibility of these benchmark results (I am already running into some issues running the scripts...)
Hi @arokem
The initial reason for not adding the runtime analysis scripts to our repo was that they were not considered part of the software itself. However, as they are part of the paper, I agree with you and have now added them as a subdirectory called 'benchmark' within the paper directory, including a small README stating the requirements for running the tests. Furthermore, the scripts were cleaned up and changed to run on a single node, so the Redis lock setup for distributed calculations is no longer required.
See https://github.com/DTUComputeStatisticsAndDataAnalysis/MBPLS/commit/13007ac8d19612ac06731b95c686fc56b916d7c4
@brainstorm: I am ready to check that last box, and from my point of view this paper is ready to be accepted for publication.
@lvermue : Thanks for adding the code. I think that it benefits the paper greatly.
One small thing: I don't think that this generates the plots that appear in the paper. I realize that this is just one small additional step, but I think that it would be helpful for readers to see how you would get from your benchmark code to these plots. But take that as a recommendation, rather than as a requirement for my review of the paper.
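For concreteness, getting from recorded timings to a figure could be as simple as the following sketch (the timing values are made-up placeholders for illustration; this is not the plotting code that was later added to the repository):

```python
# Sketch: turn recorded run-times into a log-log plot of time vs. matrix size.
# The values below are hypothetical examples, not the paper's measurements.
import matplotlib.pyplot as plt

sizes = [100, 500, 1000, 5000]
mbpls_seconds = [0.05, 0.4, 1.2, 9.8]   # hypothetical example values
ade4_seconds = [0.2, 2.1, 7.5, 60.3]    # hypothetical example values

plt.loglog(sizes, mbpls_seconds, "o-", label="MBPLS (Python)")
plt.loglog(sizes, ade4_seconds, "s-", label="ade4 (R via rpy2)")
plt.xlabel("Number of columns per block")
plt.ylabel("Run time [s]")
plt.legend()
plt.savefig("benchmark.png", dpi=150)
```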
Thanks @arokem! I would also like to see those plots in the paper itself :)
@whedon check references
Attempting to check references...
```
Reference check summary:

OK DOIs
MISSING DOIs
INVALID DOIs
```
@lvermue Please correct the missing/invalid DOIs.
@brainstorm @arokem I just updated the runtime analysis scripts. They now contain the code that was used to create the plots shown in the paper. See
https://github.com/DTUComputeStatisticsAndDataAnalysis/MBPLS/commit/abc9bf5771216a2a89bf4354d5f7f9f395d755f4
Furthermore, I had a look at the missing/invalid DOIs.
Missing DOI
I could not find any DOI for the paper "Scikit-learn: Machine learning in Python" and the suggested one is definitely wrong. Is there another way to resolve this issue?
Invalid DOI
I have triple checked this DOI and cannot see what should be wrong with it. Any idea on how to fix this one? :)
@lvermue I suspect it has to do with "special characters" in the URL. @arfon, did you encounter these errors in the whedon DOI checker parser before?
Scikit-learn: Machine learning in Python
@lvermue - the checks done by @whedon should be considered _suggestions_ not definite errors so please ignore this one.
http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S
Yes, this looks like a bug in the DOI checking code. I'll fix this now but as you say @lvermue - http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S is fine.
@whedon check references
Attempting to check references...
```
Reference check summary:

OK DOIs
MISSING DOIs
INVALID DOIs
```
OK, @whedon now recognizes http://doi.org/10.1002/(SICI)1099-128X(199809/10)12:5<301::AID-CEM515>3.0.CO;2-S as a valid DOI.
Thanks @arfon, it should be good to accept/deposit afaict. Thanks for the revisions on the paper @lvermue and for the reviewing efforts @arokem!
@lvermue Can you please provide a Zenodo DOI so we can save it as the archive?
@brainstorm @arokem Thank you for the swift and constructive review process!
@arfon Thanks for the super fast fix!
Here is the Zenodo DOI:
https://doi.org/10.5281/zenodo.2560303
@whedon set https://doi.org/10.5281/zenodo.2560303 as archive
OK. 10.5281/zenodo.2560303 is the archive.
@whedon accept
Attempting dry run of processing paper acceptance...
PDF failed to compile for issue #1190 with the following error:
```
/app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `block in find': No such file or directory - tmp/1190 (Errno::ENOENT)
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `collect!'
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `find'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/lib/whedon/processor.rb:57:in `find_paper_paths'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/bin/whedon:70:in `compile'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-dc9ad3c41cc6/bin/whedon:113:in `load'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in
```
```
Reference check summary:

OK DOIs
MISSING DOIs
INVALID DOIs
```
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@whedon accept
Attempting dry run of processing paper acceptance...
```
Reference check summary:

OK DOIs
MISSING DOIs
INVALID DOIs
```
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/479
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/479, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true
@arokem @brainstorm - do these look OK to you?
All good on my side, ready to be deposited! @arokem gave the thumbs up earlier in the thread, so I assume it's all good from the reviewers' side too.
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
:rotating_light::rotating_light::rotating_light: THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! :rotating_light::rotating_light::rotating_light:
Here's what you must now do:
Party like you just published a paper! :tada:
Any issues? Notify your editorial technical team...
@arokem - many thanks for your review and to @brainstorm for editing this submission โจ
@lvermue - your paper is now accepted into JOSS :zap::rocket::boom:
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01190/status.svg)](https://doi.org/10.21105/joss.01190)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01190">
<img src="http://joss.theoj.org/papers/10.21105/joss.01190/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01190/status.svg
:target: https://doi.org/10.21105/joss.01190
This is how it will look in your documentation:
We need your help!
Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following: