Joss-reviews: [REVIEW]: MatSurv: Survival analysis and visualization in MATLAB

Created on 23 Oct 2019  ·  88 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @jhcreed (Jordan Creed)
Repository: https://github.com/aebergl/MatSurv
Version: v1.1.0
Editor: @cMadan
Reviewer: @dsurujon, @ManuelaS
Archive: 10.5281/zenodo.3632122

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/6933794bd3df998abc2a8c95cef50d03"><img src="https://joss.theoj.org/papers/6933794bd3df998abc2a8c95cef50d03/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/6933794bd3df998abc2a8c95cef50d03/status.svg)](https://joss.theoj.org/papers/6933794bd3df998abc2a8c95cef50d03)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dsurujon & @ManuelaS, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @cMadan know.

Please try to complete your review in the next two weeks.

Review checklist for @dsurujon

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@jhcreed) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @ManuelaS

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@jhcreed) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [ ] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [ ] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Labels: accepted, published, recommend-accept, review


All 88 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @dsurujon, @ManuelaS it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...

PDF failed to compile for issue #1830 with the following error:

/app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/lib/whedon.rb:135:in `check_fields': Paper YAML header is missing expected fields: date (RuntimeError)
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/lib/whedon.rb:87:in `initialize'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/lib/whedon/processor.rb:36:in `new'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/lib/whedon/processor.rb:36:in `set_paper'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/bin/whedon:55:in `prepare'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-1e4ee47b240d/bin/whedon:116
  from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `load'
  from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23

Sorry! I have added the date and double checked that it should compile.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@jhcreed, skimming your paper I found quite a few typos ("long", "calues", and a few others). Can you and your co-authors take a closer look at the text?

We have looked back over it and fixed the typos. Thanks!

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Thank you for this opportunity to review MatSurv. I have gone through the manuscript, documentation and example usage cases. I believe this is a very useful piece of code, and works well, with plenty of options for customization of the output figures. The authors have supplied a number of reproducible examples that are very helpful as well. However, I believe the authors can better highlight the utility of MatSurv by addressing the points I raise below. Once these are addressed, I will be happy to recommend the manuscript for publication.

General checks

Authorship
I see another user @pjl54 has made a commit, but doesn't appear as an author. Depending on the extent of their contribution, I suggest including them as an author or at least mentioning them in the acknowledgements if it was a minor contribution.

Documentation

Statement of need
Please include a description of what contexts survival analysis would be useful in, and what the target audience is.

Community guidelines
Please include this section in the documentation.

Software paper

Statement of need
In the summary section, I believe the authors can be more specific with some examples of when survival analysis would be applied, and what kinds of conclusions can be drawn from the output of MatSurv and similar software. I believe the Summary section could include more information on when and how survival analysis is used, since this might not be clear for a general audience.

Quality of writing

  • I found the writing difficult to follow. The first half of the Summary section and the first paragraph of the Use section are mostly definitions, which could benefit a lot from being put into context, with descriptions of why these statistics are used, and how they give us meaningful information.
  • It is not clear which statistics MatSurv outputs in the text. It appears that the "stats" output includes hazard ratios and 95% CIs calculated both with the log-rank and the Mantel-Haenszel approach. However, the section under "Use" implies only the log-rank results are returned.
  • The authors make a point about how the log-rank test "will give slightly different results when compared to the Mantel-Haneszel or Cox regression approach, which is commonly used in R.", but then show agreement between their output and the R output in Table 1. This is confusing, but can be clarified by explaining what exactly is being reported in Table 1.
  • The chi-square statistic in Table 1 is not defined, and as it stands, it is not clear how it relates to the log-rank test.
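For readers wondering where a log-rank chi-square statistic comes from, the two-group test can be sketched in a few lines. The sketch below is purely illustrative (it is not MatSurv's MATLAB implementation; the function and variable names are invented for the example):

```python
from math import erfc, sqrt

def logrank(time1, event1, time2, event2):
    """Two-group log-rank test (illustrative sketch, not MatSurv's code).

    time*  : follow-up times;  event* : 1 = event observed, 0 = censored.
    """
    # Distinct times at which at least one event occurred in either group
    event_times = sorted({t for t, e in zip(time1, event1) if e} |
                         {t for t, e in zip(time2, event2) if e})
    observed = expected = variance = 0.0
    for t in event_times:
        n1 = sum(1 for x in time1 if x >= t)      # at risk in group 1
        n2 = sum(1 for x in time2 if x >= t)      # at risk in group 2
        d1 = sum(1 for x, e in zip(time1, event1) if e and x == t)
        d2 = sum(1 for x, e in zip(time2, event2) if e and x == t)
        n, d = n1 + n2, d1 + d2
        observed += d1
        expected += d * n1 / n                    # expected events in group 1
        if n > 1:                                 # hypergeometric variance term
            variance += d * (n1 / n) * (n2 / n) * (n - d) / (n - 1)
    chi2 = (observed - expected) ** 2 / variance  # the chi-square statistic
    p_value = erfc(sqrt(chi2 / 2))                # chi-square(1 df) tail prob.
    return chi2, p_value
```

Two groups with identical survival data yield a statistic of 0 (p = 1), while well-separated groups yield a large statistic and a small p-value.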

References

  • For the mentioned survival analysis approaches (log-rank test, hazard ratio, and Kaplan-Meier curves), it would be helpful to have references.
  • The data sources (e.g. Freireich et al., 1963) should also be referenced.

Other comments

  • I highly recommend the authors explain some key concepts such as what an "event" and a "censored event" is. The risk table included in the output should also be introduced both in the paper and in the documentation.
  • In the risk table, it is not immediately clear that the columns correspond to the same time points as in the KM curve figure. I suggest adding another row (representing Time, as in ggsurvplot) as default.
  • In the documentation, I suggest describing the datasets used in more detail, as it would add to why this software is important/useful
  • While the input options are defined in the documentation, there is little information on the output (especially the stats output variable). I suggest describing each individual output statistic similar to what's included as comments in the MatSurv.m script
  • Table 1 does not have a legend, and it appears that in the "Data" column, the last row has a typo (LMAL instead of LAML).
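To make the "event"/"censored event" distinction concrete: a Kaplan-Meier curve only steps down at event times, while censored subjects simply leave the at-risk count. A minimal, illustrative Python sketch (not MatSurv's implementation; names are invented for the example) that also shows where the risk-table numbers come from:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate for one group (illustrative sketch only).

    times  : follow-up time for each subject
    events : 1 if the event (e.g. death) was observed at that time,
             0 if the subject was censored (lost to follow-up)
    """
    data = sorted(zip(times, events))
    n = len(data)
    surv = 1.0
    curve = []                       # (time, at-risk entering t, S(t))
    i = 0
    while i < n:
        t = data[i][0]
        at_risk = n - i              # subjects with follow-up >= t
        deaths = 0
        while i < n and data[i][0] == t:
            deaths += data[i][1]     # censored subjects contribute 0
            i += 1
        if deaths:                   # the curve only steps at event times
            surv *= 1 - deaths / at_risk
            curve.append((t, at_risk, surv))
    return curve
```

A risk table is then just the at-risk count read off at the plotted tick times, which is why the risk-table columns line up with the time axis of the KM curve.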

Minor comments

@dsurujon, thank you for the thorough review!

@ManuelaS, it would be great if you could begin your review soon, thanks!

Thanks for the reminder @cMadan, I'll work through the review during the weekend!

👋 @ManuelaS - how is your review going?

Hello all, We have made the changes to MatSurv and the Documentation as suggested by @dsurujon and we are waiting for the second reviewer comments so we can finalize our edits of the article.

Apologies for the delay. I'll get to this in the next few days.

Review
Apologies for the delay in getting to this review.
I enjoyed reading through the manuscript and using MatSurv. MatSurv is well-written, easy to “install” and use, has good documentation and offers plenty of options to tailor Kaplan-Meier plots to the specific study needs. I believe MatSurv will be a useful addition for the MATLAB community.

Below are some suggestions:

  • Survival analysis is applied in many fields of research and some of those communities are strong MATLAB users. I believe keeping the description of survival analysis more general and referring to the clinical setting as one of the many applications would attract a broader usership.

  • I believe it would be helpful to include in the paper that simple Kaplan-Meier plots (without risk-tables) can be drawn and Cox regression models can be fitted in MATLAB with the Statistics and Machine Learning Toolbox.

  • Include minimum required MATLAB version along with toolboxes (and when they are needed).

    • I ran into this issue when I first started reviewing it. The code checks that the MATLAB Version is
      at least 2016b, though it would be helpful for users to be made aware before downloading,
      installing and attempting to use the code.
    • While no other toolboxes are needed to use MatSurv with defaults, it requires the Statistics and
      Machine Learning Toolbox when setting the flag CutPoint to ‘Quartile’ or ‘Tertile’ (because it
      uses the function prctile).
  • The authors mention that the statistical output from MatSurv is close to, but does not match, the output from R and SAS. I believe this section should be expanded with more details of why the results diverge and the impact not only on p-values, but also on hazard ratios and confidence intervals. Whether the results match or almost match outputs from “established” software is important from a user perspective, and I believe this information should feature more prominently in the ReadMe as well as the paper.

  • MatSurv works well and as advertised (I used MATLAB 2019b and the latest GitHub version of the code). Though I ran into a couple of issues while I was testing it out:

    • Flag GroupOrder does not seem to work as expected (issue:

      https://github.com/aebergl/MatSurv/issues/8)

    • P-values change when setting GroupOrder in some settings (issue:

      https://github.com/aebergl/MatSurv/issues/9)

  • I did not see any unit tests (which would also be useful for contributors interested in submitting PRs).

Minor comments:

  • The documentation for the flag CutPoint is missing the option Tertile in the ReadMe
  • It would be helpful to match the color scheme and layout for the 3 subplots in the manuscript figure which compares the plotting capabilities of MatSurv with those from SAS and R.
  • A few typos I came across:

    • Typo in “suitiable” (“Why MatSurv” section, ReadMe)

    • Typo “If it is a continues variable” (documentation string in MatSurv.m)

    • Missing third person in “table that describe“ (“Summary” section in the paper)

We would like to start by thanking @dsurujon for the excellent work and insightful comments. Our changes and responses to the individual comments appear in bold below.

General checks

_Authorship_
I see another user @pjl54 has made a commit, but doesn't appear as an author. Depending on the extent of their contribution, I suggest including them as an author or at least mentioning them in the acknowledgements if it was a minor contribution. **We have now added Patrick Leo to the acknowledgements.**

Documentation

_Statement of need_
Please include a description of what contexts survival analysis would be useful in, and what the target audience is. **This has now been added.**

_Community guidelines_
Please include this section in the documentation. **This is now included.**

Software paper

_Statement of need_
In the summary section, I believe the authors can be more specific with some examples of when survival analysis would be applied, and what kinds of conclusions can be drawn from the output of MatSurv and similar software. I believe the Summary section could include more information on when and how survival analysis is used, since this might not be clear for a general audience. **We have rewritten this section and also added more text.**

_Quality of writing_

  • I found the writing difficult to follow. The first half of the Summary section and the first paragraph of the Use section are mostly definitions, which could benefit a lot from being put into context, with descriptions of why these statistics are used, and how they give us meaningful information. **More text has now been added.**
  • It is not clear which statistics MatSurv outputs in the text. It appears that the "stats" output includes hazard ratios and 95% CIs calculated both with the log-rank and the Mantel-Haenszel approach. However, the section under "Use" implies only the log-rank results are returned. **This has now been changed.**
  • The authors make a point about how the log-rank test "will give slightly different results when compared to the Mantel-Haneszel or Cox regression approach, which is commonly used in R.", but then show agreement between their output and the R output in Table 1. This is confusing, but can be clarified by explaining what exactly is being reported in Table 1. **This has now been clarified in the article.**
  • The chi-square statistic in Table 1 is not defined, and as it stands, it is not clear how it relates to the log-rank test. **Chi-square has now been explained.**

_References_

  • For the mentioned survival analysis approaches (log-rank test, hazard ratio, and Kaplan-Meier curves), it would be helpful to have references. **More references have been added.**
  • The data sources (e.g. Freireich et al., 1963) should also be referenced. **This reference has been added.**

Other comments

  • I highly recommend the authors explain some key concepts such as what an "event" and a "censored event" is. The risk table included in the output should also be introduced both in the paper and in the documentation. **We have added text describing this in the paper and also in the documentation.**
  • In the risk table, it is not immediately clear that the columns correspond to the same time points as in the KM curve figure. I suggest adding another row (representing Time, as in ggsurvplot) as default. **Great comment! We have now added this as a default option and also added an option to put the risk table as part of the KM-plot.**
  • In the documentation, I suggest describing the datasets used in more detail, as it would add to why this software is important/useful. **We have now added text describing the data better.**
  • While the input options are defined in the documentation, there is little information on the output (especially the stats output variable). I suggest describing each individual output statistic similar to what's included as comments in the MatSurv.m script. **We have now added this section in the documentation.**
  • Table 1 does not have a legend, and it appears that in the "Data" column, the last row has a typo (LMAL instead of LAML). **Fixed.**

_Minor comments_

Thanks for the excellent review and great suggestions by @ManuelaS. Your comments have greatly improved the functionality of MatSurv.

Review
Apologies for the delay in getting to this review.
I enjoyed reading through the manuscript and using MatSurv. MatSurv is well-written, easy to “install” and use, has good documentation and offers plenty of options to tailor Kaplan-Meier plots to the specific study needs. I believe MatSurv will be a useful addition for the MATLAB community. **Thanks!**

Below are some suggestions:

  • Survival analysis is applied in many fields of research and some of those communities are strong MATLAB users. I believe keeping the description of survival analysis more general and referring to the clinical setting as one of the many applications would attract a broader usership. **We have added more text in the article to address this comment.**

  • I believe it would be helpful to include in the paper that simple Kaplan-Meier plots (without risk-tables) can be drawn and Cox regression models can be fitted in MATLAB with the Statistics and Machine Learning Toolbox. **The following text has been added to the article: "The Statistics and Machine Learning Toolbox support Cox proportional hazards regression using the coxphfit function and KM-plots can be created using the plot or stairs functions."**

  • Include minimum required MATLAB version along with toolboxes (and when they are needed).

    • I ran into this issue when I first started reviewing it. The code checks that the MATLAB version is at least 2016b, though it would be helpful for users to be made aware before downloading, installing and attempting to use the code. **The MATLAB Release Compatibility has been added to the top of the README file, and there is also a section describing MATLAB Release Compatibility.**
    • While no other toolboxes are needed to use MatSurv with defaults, it requires the Statistics and Machine Learning Toolbox when setting the flag CutPoint to ‘Quartile’ or ‘Tertile’ (because it uses the function prctile). **MatSurv is no longer dependent on prctile. If the Statistics and Machine Learning Toolbox is available, prctile will be used; if not, the percentiles will be calculated in a similar way.**
  • The authors mention that the statistical output from MatSurv is close to, but does not match, the output from R and SAS. I believe this section should be expanded with more details of why the results diverge and the impact not only on p-values, but also on hazard ratios and confidence intervals. Whether the results match or almost match outputs from “established” software is important from a user perspective, and I believe this information should feature more prominently in the ReadMe as well as the paper. **This has now been clarified in the article.**

  • MatSurv works well and as advertised (I used MATLAB 2019b and the latest GitHub version of the code). Though I ran into a couple of issues while I was testing it out:

    • Flag GroupOrder does not seem to work as expected (issue:

      https://github.com/aebergl/MatSurv/issues/8) **Fixed. It can now handle both a cell array and a scalar vector as input for GroupOrder.**

    • P-values change when setting GroupOrder in some settings (issue:

      https://github.com/aebergl/MatSurv/issues/9) **Fixed. It turns out that the calculation of the log-rank p-value does not work when there are two or more groups with no events. We have added error checking for the groups that warns about this condition but still displays the KM-plot. We have also added an option so one can easily merge groups with a multi-level cell structure as the GroupsToUse input variable.**



  • I did not see any unit tests (which would also be useful for contributors interested in submitting PRs). **We have now added a test script for MatSurv.**



Minor comments:

  • The documentation for the flag CutPoint is missing the option Tertile in the ReadMe. **Fixed.**
  • It would be helpful to match the color scheme and layout for the 3 subplots in the manuscript figure which compares the plotting capabilities of MatSurv with those from SAS and R. **Fixed.**
  • A few typos I came across:

    • Typo in “suitiable” (“Why MatSurv” section, ReadMe). **Fixed.**

    • Typo “If it is a continues variable” (documentation string in MatSurv.m). **Fixed.**

    • Missing third person in “table that describe“ (“Summary” section in the paper). **Fixed.**

_Originally posted by @ManuelaS in https://github.com/openjournals/joss-reviews/issues/1830#issuecomment-564997236_

@cMadan We have addressed and commented on the changes that we have made to MatSurv. What should we do next?
I'm a bit new to the process, so sorry if I have missed anything.

@aebergl @jhcreed, thank you for your work on responding to the reviewers. At this stage the reviewers should look over your changes to the project and see if they are satisfied with your updates/responses.

@dsurujon @ManuelaS, it would be great if you could look over the changes the authors have made and let us know what you think!

It looks like the article text has been updated substantially. Could you please compile an updated article proof?

@cMadan
Hello again.
Is generating the pdf something we can do?

@whedon generate pdf

@aebergl, the @whedon generate pdf command is something that the authors are able to run themselves, as needed.

@cMadan Thanks for your help and for letting me know how to do this myself next time. The proofs look good.

@whedon check references

Reference check summary:

OK DOIs

- 10.1182/blood.V21.6.699.699 is OK

MISSING DOIs

- https://doi.org/10.2307/2532873 may be missing for title: Survival analysis, a self‐learning text.
- https://doi.org/10.1136/bmjopen-2019-030215 may be missing for title: Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate

INVALID DOIs

- None

@aebergl & @cMadan, the changes to the ReadMe and manuscript along with the addition of unit tests and bug fixes look good to me.
Regarding the unit tests, I am guessing the "ground truth" stats included in the mat files that the test script checks against were generated by MatSurv. I wonder (not a review-blocking suggestion) whether adding some tests that compare the output of MatSurv to the statistical results from independent software (for example, the survival package in R) would help in catching potential errors in the computations arising in corner cases.

P.S. It looks like a couple of references may be missing their DOI.

I am happy to recommend acceptance!

@whedon check references

I'm sorry human, I don't understand that. You can see what commands I support by typing:

@whedon commands

@whedon check references

Reference check summary:

OK DOIs

- 10.1182/blood.V21.6.699.699 is OK

MISSING DOIs

- https://doi.org/10.2307/2532873 may be missing for title: Survival analysis, a self‐learning text.
- https://doi.org/10.1136/bmjopen-2019-030215 may be missing for title: Proposals on Kaplan-Meier plots in medical research and a survey of stakeholder views: KMunicate

INVALID DOIs

- None

@whedon check references

Reference check summary:

OK DOIs

- 10.1007/978-1-4419-6646-9 is OK
- 10.1136/bmjopen-2019-030215 is OK
- 10.1182/blood.V21.6.699.699 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon generate pdf

The proofs look good. No edits from me.

The changes to the manuscript, software and documentation look great. I am happy to recommend this updated version for publication!

@cMadan
Hello again. What is the next step?
Cheers,
/Anders

@aebergl @jhcreed, everything looks good to me!

To move forward with accepting your submission, there are a few last things to take care of:

  • [x] Make a tagged release of your software, and list the version tag of the archived version here.
  • [x] Archive the reviewed software in Zenodo
  • [x] Check the Zenodo deposit has the correct metadata, this includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it); you may also add the authors' ORCID.
  • [x] List the Zenodo DOI of the archived version here.

You may find this helpful: https://guides.github.com/activities/citable-code/

@cMadan
v1.1.0
DOI

@whedon set v1.1.0 as version

OK. v1.1.0 is the version.

@whedon set 10.5281/zenodo.3632122 as archive

OK. 10.5281/zenodo.3632122 is the archive.

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.1007/978-1-4419-6646-9 is OK
- 10.1136/bmjopen-2019-030215 is OK
- 10.1182/blood.V21.6.699.699 is OK

MISSING DOIs

- None

INVALID DOIs

- None

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1260

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1260, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@openjournals/joss-eics, I think we're all set to accept here!

@jhcreed below are some remaining minor points in relation to the paper. You can implement them and call @whedon generate pdf here to recreate the pdf. Thanks.

  • [ ] Can you type out the state and country in your affiliation? E.g.
    Moffitt Cancer Center, Tampa, Florida, United States
  • [ ] In The methods described below was developed for..., methods is plural so use "were" i.e.:
    The methods described below were developed for
  • [ ] Change .. always should be ... to .. should always be ...
  • [ ] Would it be possible to update figure 1 to a 3x1 panel? The A and B parts appear rather small now, especially the font and line width compared to the C part of that figure. If you place A, B, and C all under each other with the same larger font size and graph edge widths (and matched axis ticks?) this would be a big improvement.

👋 @jhcreed - note that this is mostly ready to accept, but we are waiting for your actions as in the comment above

@whedon generate pdf

Please confirm here when you have addressed the actions and are ready to proceed.

The text and figure have been updated and we are ready to proceed. Thank you!

Can you reduce the size of the figures very slightly - right now, they are slightly bigger than the page (they overlap the page footer).

I'm also confused by the "(MATLAB 2019B)" in the text, which doesn't seem to match anything in the reference list. Similarly, Freireich 1963 and Ley 2013. Looking at the .md, this turns out to be because you are doing the references manually - please see the JOSS example paper and bib file and adjust your paper to correctly refer to the bib file entries - you will need to add these three additional papers to the bib file as well.

Then again let me know when you think you are ready to proceed.

The figure has been reduced so that it now fits on one page without running into the footer. We have also changed "(MATLAB 2019B)" to "[as of version MATLAB 2019B]" to clarify that we are specifying a version of software and not a reference. All of the references have been updated in the .md and .bib. We are ready to try to proceed again.

@whedon generate pdf

It still seems like MATLAB 2019b should be a reference, entered in the .bib file and cited in the .md file; otherwise, "MATLAB 2019B" doesn't refer to anything that the reader can find.

The first reference also needs to be fixed: the first author's name is incorrect.

please make changes, use @whedon generate pdf to check them, and let me know when this is ready again

@whedon generate pdf

No edits from me

We are ready to try again!

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.1007/978-1-4419-6646-9 is OK
- 10.1136/bmjopen-2019-030215 is OK
- 10.1182/blood.V21.6.699.699 is OK

MISSING DOIs

- None

INVALID DOIs

- None

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1301

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1301, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1302
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01830
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

    Any issues? Notify your editorial technical team...

👋 @openjournals/dev & @arfon - note that reference 2 in the XML has the same URL problem we've seen before with the GitHub URLs. Once this finishes processing, I will leave this issue open for you to manually fix the problem

Thanks to @dsurujon & @ManuelaS for reviewing!
and @cMadan for editing!

Congratulations to @jhcreed and co-authors!

Thanks everyone!
@dsurujon & @ManuelaS for providing excellent suggestions and comments.
@cMadan & @danielskatz for editing and questions.

Thanks. I've fixed this now.

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01830/status.svg)](https://doi.org/10.21105/joss.01830)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01830">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01830/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01830/status.svg
   :target: https://doi.org/10.21105/joss.01830

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
