Submitting author: @meyera (Alexander Meyer)
Repository: https://github.com/meyera/riskscorer
Version: v0.2.0
Archive: http://dx.doi.org/10.5281/zenodo.51342
Editor: @arfon
Reviewer: @maelle
Status badge code:
HTML: <a href="http://joss.theoj.org/papers/b6fca2486948af8ebd56fc31fc7277c3"><img src="http://joss.theoj.org/papers/b6fca2486948af8ebd56fc31fc7277c3/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/b6fca2486948af8ebd56fc31fc7277c3/status.svg)](http://joss.theoj.org/papers/b6fca2486948af8ebd56fc31fc7277c3)
[ ] Archive: Does the software archive resolve?
[ ] Installation: Does installation proceed as outlined in the documentation?
[ ] Performance: Have the performance claims of the software been confirmed?
[ ] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
Compiled paper PDF: 10.21105.joss.00019.pdf
[ ] Does the paper.md file include a list of authors with their affiliations?
/ cc @openjournals/joss-reviewers - would anyone be willing to review this submission?
If you would like to review this submission then please comment on this thread so that others know you're doing a review (so as not to duplicate effort). Something as simple as :hand: I am reviewing this will suffice.
Reviewer instructions
- Please work through the checklist at the start of this issue.
- If you need any further guidance/clarification take a look at the reviewer guidelines here: http://joss.theoj.org/about#reviewer_guidelines
- Please make a publication recommendation at the end of your review.
Any questions, please ask for help by commenting on this issue!
I am reviewing this; please bear with any latency on my side.
So far:
@meyera
Repository is up and running, resolving to the mentioned URL.
Author affiliation and credit: checks out ✅
Software license, MIT: check ✅
References are clear enough.
Minor amendments:
You only misspelt "therefore". Based on the text, I take your audience to be in medical or clinical coding; this needs an explicit, affirmative statement in the abstract.
So there's a need for an easily accessible risk-calculation structure, engine, or kernel to verify the maths. With appropriate methods, you probably need formal verification and peer review embedded in the process?
The second paragraph of the abstract is enough to work with for those familiar with clinical or mission-critical environments, but you need to offer more detail on the context of extensibility if you want contributors, without scaring anyone off yet. What features do you want to add beyond score methods when citing "extensibility"?
By simple programming interface, do you mean a command-line interface, or a graphical user interface with everyday WIMP or touch-based icons? Feel free to elaborate on whether this will operate exclusively as a headless service, or in any other incarnation you deem appropriate for your evaluation and operating environments.
Fleshing out the structure before under- or over-committing to needs will be enough.
Test data parses fine on a dry-run reading of
https://github.com/meyera/riskscorer/blob/master/README.md
Setting up my cloud for R is taking long
@Spencerx - feel free to check off items at the top of the issue as you go...
@Spencerx @arfon
Thank you for reviewing my submission. I am not totally sure how the processes work, that's why I just would like to ask: should I start to revise the points mentioned by @Spencerx right now and commit or wait until the review is finished and I get the official decision to revise?
Thank you very much for your effort.
Alex
should I start to revise the points mentioned by @Spencerx right now and commit or wait until the review is finished and I get the official decision to revise?
@meyera - feel free to make some changes as you go. Just let @Spencerx know that you've made them.
@meyera feel free to take your time and get it right the way you need it
@Spencerx
Please excuse my late answer.
Here are my comments and my revision based on your remarks:
You only misspelt "therefore". Based on the text, I take your audience to be in medical or clinical coding; this needs an explicit, affirmative statement in the abstract.
The spelling error is corrected.
So there's a need for an easily accessible risk-calculation structure, engine, or kernel to verify the maths. With appropriate methods, you probably need formal verification and peer review embedded in the process?
The audience is scientists and statisticians working in clinical research. The usual risk scores in clinical medicine are simple in structure, but due to the large number of patients, manual calculation is a hassle. Moreover, there are no tools allowing for batch processing based on a database. The riskscorer package aims to simplify automatic calculation on demand, for instance as part of the admission process of a patient, as well as batch processing of patient records.
Verification of the two implemented scores (EuroScore I and STS score) is accomplished as described below:
_EuroScore I:_
The score is a simple logistic regression model. I have written several test cases comparing the calculated score against a known score for the specific case.
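As a rough illustration of that testing style (a minimal sketch in Python rather than R, and with made-up coefficients; the real EuroScore I weights are published in the original papers, and this is not the riskscorer API), a logistic-model score can be checked against a hand-computed known value like this:

```python
import math

# Illustrative sketch only: these coefficients and risk factors are made up,
# NOT the published EuroScore I weights; they just show the model's shape.
INTERCEPT = -4.8
COEFFS = {"age_per_5y_over_60": 0.07, "female": 0.33, "copd": 0.49}

def logistic_risk(factors):
    """Predicted mortality = exp(lp) / (1 + exp(lp)), lp = intercept + sum(beta_i * x_i)."""
    lp = INTERCEPT + sum(COEFFS[name] * value for name, value in factors.items())
    return math.exp(lp) / (1.0 + math.exp(lp))

# A test case in the style described above: compare the calculated risk
# against a hand-computed known value for one specific patient profile.
known = math.exp(-3.84) / (1.0 + math.exp(-3.84))  # lp = -4.8 + 0.14 + 0.33 + 0.49
calculated = logistic_risk({"age_per_5y_over_60": 2, "female": 1, "copd": 1})
assert abs(calculated - known) < 1e-9
```

A suite of such fixed input/expected-output pairs is what guards the score implementation against regressions.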
_STS:_
The STS score is more complicated than the EuroScore I. The STS provides a web API for risk score processing, for which, however, no convenient wrapper exists. The risk calculation function is actually a wrapper of the STS risk calculation web service. Test cases were performed similarly to those for EuroScore I.
The second paragraph of the abstract is enough to work with for those familiar with clinical or mission-critical environments, but you need to offer more detail on the context of extensibility if you want contributors, without scaring anyone off yet. What features do you want to add beyond score methods when citing "extensibility"?
You are right, extensibility is an exaggeration. In the case of the riskscorer package, extensibility would just mean adding another calculation function for a dedicated score.
Therefore I deleted the extensibility point in the Paper.md file.
When you refer to 'the abstract', do you mean the Paper.md file?
By simple programming interface, do you mean a command-line interface, or a graphical user interface with everyday WIMP or touch-based icons? Feel free to elaborate on whether this will operate exclusively as a headless service, or in any other incarnation you deem appropriate for your evaluation and operating environments.
By simple programming interface I mean: each score -> one function. Nothing more. The coding of the various score parameters is flexibly recognized as elaborated in the package's vignette.
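To make the "each score -> one function" idea concrete, here is a hypothetical sketch (in Python, not the actual riskscorer API; function names, arguments, and the normalisation table are all invented for illustration) of one score function with flexible parameter coding:

```python
# Hypothetical "one score -> one function" interface with flexible coding
# of parameters. Names and the score formula are illustrative only.
SEX_CODES = {"m": "male", "male": "male", "1": "male",
             "f": "female", "female": "female", "0": "female"}

def normalize_sex(value):
    """Accept several common encodings ('m', 'Male', 0, ...) for one concept."""
    key = str(value).strip().lower()
    if key not in SEX_CODES:
        raise ValueError(f"unrecognised sex coding: {value!r}")
    return SEX_CODES[key]

def calc_score(age, sex):
    """One function per score; here a dummy additive score for illustration."""
    sex = normalize_sex(sex)
    return age / 10.0 + (1.0 if sex == "female" else 0.0)

# The same patient, coded three different ways, yields the same score:
assert calc_score(70, "F") == calc_score(70, "female") == calc_score(70, 0)
```

The point of the normalisation layer is that callers can pass parameters however their database encodes them, and each score still lives behind a single function.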
This software consists of 350+ lines of code. I would like to ask the
author to confirm here that he thinks this software is a valuable
contribution to research. I want to point out that once this work has
been published it will be visible for a long time. Adding an example
of having used this tool in research would be a good idea.
@pjotrp @Spencerx @arfon
Thank you for your comment. I agree that one should ask whether software submitted to JOSS is a valuable contribution to research.
In regard to the number of lines of code: The _riskscorer_ package consists of several source files.
It remains unclear to me how you counted 350 lines of code. In general, however, I do think that one should not use the number of lines of code as an indicator for anything.
Regarding the added value of _riskscorer_ to science: calculating risk scores is a tedious task that takes a lot of time that could be spent on the core research instead. Currently there are no tools available that automate risk calculation for the prominent ES II or STS scores. The _riskscorer_ package provides a convenient and time-saving way to batch-calculate the risk scores. Moreover, the web-service-ready design enables integration into the clinical IT infrastructure and workflow and makes it possible to calculate the risk score automatically right at the admission of a patient. Because of the tedious work involved, these scores are rarely calculated manually in clinical reality. Automating this process therefore enables the use of powerful tools such as the STS score as a clinical decision support instrument.
We already use the package in our clinical research; papers where this tool was used will soon be ready for submission to clinical journals. Currently the EurValve EU research project is looking at the source code in order to re-implement it in another language for their own project (http://cordis.europa.eu/project/rcn/199897_en.html).
I hope the value to clinical research that riskscorer gives is now a little clearer.
Yours,
Alex
I hope the value to clinical research that riskscorer gives is now a little clearer.
Thanks for this clear explanation @meyera
@Spencerx @meyera - how are we doing on this review?
@whedon assign @Spencerx as reviewer
OK, the reviewer is @Spencerx
@whedon assign @arfon as editor
OK, the editor is @arfon
@Spencerx - could you please give us an update on this review?
@whedon list reviewers
Here's the current list of JOSS reviewers: https://github.com/openjournals/joss/blob/master/docs/reviewers.csv
@meyera - it seems like @Spencerx isn't going to be able to complete this review. Could you suggest any alternative reviewers who you think might be qualified to complete this review?
@arfon - thanks for the pointer. I suggest @masalmon
Ok! I have just read the thread. I will try to do it soon.
Congratulations on creating this package. I can see how it can be useful to have the risk scorer as R functions instead of web interfaces, and your package could have a larger list of such scorers in the future. Here is my review, and I've opened several issues in the software repository itself.
[x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
Yes, but it gets an R CMD check error because apparently the license file needs to be mentioned in the DESCRIPTION.
[x] Version: Does the release version given match the GitHub release (v0.2.0)?
[x] Archive: Does the software archive resolve?
[ ] Installation: Does installation proceed as outlined in the documentation?
The reason I don't check this box yet is that not all dependencies used in the vignette are listed in the DESCRIPTION, so if one tried to install the package and build the vignette at the same time there'd be an error.
[ ] Functionality: Have the functional claims of the software been confirmed?
The reasons I leave this unchecked are 1) the examples in the vignette are not commented at all, so it's hard to see whether this is the expected behaviour of the functions, and 2) as mentioned in the issue about the vignette, it'd be nice to mention the existence of tests comparing values obtained with the package against known values.
There's a special-character issue in at least one Rd file, so please check the lists of arguments.
No, and this would be really important, e.g. if someone wishes to add a score calculator to your package.
Compiled paper PDF: 10.21105.joss.00019.pdf
Does the paper.md file include a list of authors with their affiliations?
Also @arfon I'm sorry for doing the review before being officially assigned as a reviewer, but I figured I'd better do it when I had time, plus I guess the review can still be useful.
Hey no problem. Thank you so much for picking this up!
@whedon assign @masalmon as reviewer
OK, the reviewer is @masalmon
@meyera - please reply here when you've had a chance to address/respond to @masalmon's review.
@masalmon thank you very much for the fast review. I will start working on it and inform you once everything's done.
Cool, feel free to tag me here or in the repo if I can help!
Friendly reminder @meyera - how are you getting on?
@meyera cc @arfon as mentioned I can clarify my feedback and help if needed
@meyera - how are you getting on?
I just emailed @meyera directly. If there's no response from him in two weeks I will reject this submission.
Makes sense
@maelle thanks for all of your help here. This submission has been rejected due to the author going AWOL.
No problem!