Joss-reviews: [REVIEW]: TEfits: Nonlinear regression for time-evolving indices

Created on 31 Jul 2020  ·  56 comments  ·  Source: openjournals/joss-reviews

Submitting author: @akcochrane (Aaron Cochrane)
Repository: https://github.com/akcochrane/TEfits
Version: v00.77.12
Editor: @cMadan
Reviewers: @ejhigson, @paul-buerkner
Archive: 10.5281/zenodo.3992314

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/0d67da372696cc9a817255858d8bb8a7"><img src="https://joss.theoj.org/papers/0d67da372696cc9a817255858d8bb8a7/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/0d67da372696cc9a817255858d8bb8a7/status.svg)](https://joss.theoj.org/papers/0d67da372696cc9a817255858d8bb8a7)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ejhigson & @paul-buerkner, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @cMadan know.

✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨

Review checklist for @ejhigson

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@akcochrane) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @paul-buerkner

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@akcochrane) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Labels: R, accepted, published, recommend-accept, review

All 56 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @ejhigson, @paul-buerkner it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

I just finished my review and have the following remaining points:

  • I couldn't find any installation instructions in the README or elsewhere. Did I just overlook them? (Of course I know how to install from GitHub, but other people may not.)
  • Is there any plan to publish this package on a maintained package repository such as CRAN? Of course, this is not required for the JOSS submission; it's more a question I have out of interest.
  • Although the documentation is generally of high quality, some of the examples are a little thin. For example, in coef.TEfit, the example only reads coef(model_fit_by_TEfit), which is not executable since model_fit_by_TEfit is not defined. I understand the thought process leading to such examples, but it would be beneficial to have examples that actually run from end to end.

@paul-buerkner Thank you for the input.

  • I've updated the README to include suggested installation instructions.

  • I may try to publish the package on CRAN at some point. I would want to re-write much of the package first, however, to make improvements to the interface and performance as well as ensure conformity to CRAN guidelines. I will not be able to manage that workload alone for the foreseeable future.

  • Thanks for pointing that out. I'll make a quick pass through the examples today, although it's still likely that I'll miss things occasionally.
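For readers of this thread, the GitHub installation route that the updated README describes can be sketched roughly as follows (a hypothetical sketch assuming only that the package installs via devtools; the commands in the README itself are canonical):

```r
# Install the devtools helper if it is not already available
install.packages("devtools")

# Install TEfits directly from the GitHub repository
devtools::install_github("akcochrane/TEfits")

library(TEfits)
```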

Thanks! Looks good to me.

@paul-buerkner, thanks for reviewing this submission!

@ejhigson, how are things going?

Hi @cMadan - sorry for not being as quick as @paul-buerkner! I am a bit tied up right now but will ensure I review this in the next two weeks

@ejhigson, sounds good, thanks for the update!

@whedon check references

Hi @akcochrane - congratulations on a nice software package! I only have a few minor comments:

  1. It might be worth adding instructions on running the package's tests to the install section of the README.
  2. Some of the references are missing DOIs - please can you add these if they are available (no worries if not). I had thought that commenting "@whedon check references" would check your references automatically but it doesn't seem to have worked just now.
  3. I saw a couple of minor typos in the paper:
  • last sentence of first paragraph: "allows" -> "allowing".
  • first sentence of last paragraph: "as well as having results using TEfits" -> "and results using TEfits have been"?

Thank you for the input @ejhigson ! I've implemented your suggestions.

Just out of curiosity, is it typical to include instructions for tests to be included in high-level introductory documentation? In my minimal experience tests had struck me as being developer-oriented, but knowing more about user-oriented tests would probably help me write better docs and tests!

Thank you for this @akcochrane! Personally I like to include a line for how to run tests in software install documentation (you can say running tests is optional), but perhaps others have different preferences. If you want to keep the start of the README high-level and concise then I would be happy for you to include the test instructions elsewhere in the documentation instead - up to you.

Although the package generally seems to work when running the examples, I get a "no tests found" error when running the tests with test_package (below). Do you know why this is?

> testthat::test_package('TEfits')
Error: No tests found for TEfits

For reference, here are details of my session:

> sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.1252 
[2] LC_CTYPE=English_United Kingdom.1252   
[3] LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C                           
[5] LC_TIME=English_United Kingdom.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] TEfits_00.77.07 testthat_2.3.2 

loaded via a namespace (and not attached):
[1] compiler_4.0.2 magrittr_1.5   R6_2.4.1       rlang_0.4.7   

Thanks for pointing that out. I'm not super familiar with the intricacies of devtools, but it looks like installing locally (i.e., devtools::install('dir')) includes tests, but installing from GitHub (i.e., devtools::install_github('repo')) doesn't. In that case, it looks like users might need to download the repo and use devtools::install() from there.

I'd be happy to hear other suggestions, particularly if there's an option for install_github() to include tests. I don't see anything in the install_github() docs that refers to tests, though.

If the above seems like a good way to suggest tests to users then I'll write up some instructions saying, in effect, that users will need to download/clone the repo and install locally.

(EDIT: I've implemented this suggestion near the bottom of the README)
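The download-and-install-locally route suggested above might look like this (a sketch, assuming the repository has already been cloned or unpacked into a local TEfits/ directory):

```r
# install_github() omits the tests/ directory, so install from a local
# copy of the repository instead if you want to run the test suite.
# (e.g., first run in a shell: git clone https://github.com/akcochrane/TEfits.git)
devtools::install("TEfits")        # install from the local source directory
testthat::test_package("TEfits")   # run the package's testthat tests
```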

@whedon generate pdf

Installing locally with devtools::install() from a download of the latest version of master from GitHub makes the tests run for me! Please can you note this in the documentation somewhere, to avoid confusion for users who want to run the tests? I get some errors when I run the tests, though, due to "undefined columns selected". Do you get these too, and do you know why they are occurring?

> testthat::test_package('TEfits')

Your rate is very close to the boundary. Consider penalizing the likelihood.
-- 1. Error: TEfitAll runs with identity link and OLS error function (@test.chec
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:16:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:17:4
 8. base::`[.data.frame`(...)


Your rate is very close to the boundary. Consider penalizing the likelihood.
-- 2. Error: TEfitAll runs with identity link and logcosh error function (@test.
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:28:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:29:4
 8. base::`[.data.frame`(...)


Your rate is very close to the boundary. Consider penalizing the likelihood.
-- 3. Error: TEfitAll runs with identity link and bernoulli error function (@tes
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:41:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:42:4
 8. base::`[.data.frame`(...)

-- 4. Error: TEfitAll runs with Weibull link and OLS error function (@test.check
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:54:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:55:4
 8. base::`[.data.frame`(...)

-- 5. Error: TEfitAll runs with Weibull link and bernoulli error function (@test
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:67:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:68:4
 8. base::`[.data.frame`(...)


Warning: model did not converge at tol = 0.05 . Consider respecifying, allowing more runs, or increasing the convergence tolerance.
-- 6. Error: TEfitAll runs with logistic link and OLS error function (@test.chec
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:80:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:81:4
 8. base::`[.data.frame`(...)


Warning: model did not converge at tol = 0.05 . Consider respecifying, allowing more runs, or increasing the convergence tolerance.
-- 7. Error: TEfitAll runs with logistic link and bernoulli error function (@tes
object 'absRat' not found
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:93:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:94:4
 5. TEfits::TEfit(...)
 6. TEfits::tef_tryFits(modList, whichPnames = "null_pNames", whichFun = "null_fun")
 7. TEfits::tef_fitErr(...)
 8. [ base::eval(...) ] with 1 more call


Your rate is very close to the boundary. Consider penalizing the likelihood.
-- 8. Error: TEfitAll runs with d prime link function (@test.check_TEfitAll.R#10
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:106:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:107:4
 8. base::`[.data.frame`(...)

== testthat results  ===========================================================
[ OK: 45 | SKIPPED: 0 | WARNINGS: 4 | FAILED: 8 ]
1. Error: TEfitAll runs with identity link and OLS error function (@test.check_TEfitAll.R#16) 
2. Error: TEfitAll runs with identity link and logcosh error function (@test.check_TEfitAll.R#28) 
3. Error: TEfitAll runs with identity link and bernoulli error function (@test.check_TEfitAll.R#41) 
4. Error: TEfitAll runs with Weibull link and OLS error function (@test.check_TEfitAll.R#54) 
5. Error: TEfitAll runs with Weibull link and bernoulli error function (@test.check_TEfitAll.R#67) 
6. Error: TEfitAll runs with logistic link and OLS error function (@test.check_TEfitAll.R#80) 
7. Error: TEfitAll runs with logistic link and bernoulli error function (@test.check_TEfitAll.R#93) 
8. Error: TEfitAll runs with d prime link function (@test.check_TEfitAll.R#106) 

Error: testthat unit tests failed

Here is my session info:

> sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.1252 
[2] LC_CTYPE=English_United Kingdom.1252   
[3] LC_MONETARY=English_United Kingdom.1252
[4] LC_NUMERIC=C                           
[5] LC_TIME=English_United Kingdom.1252    

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] TEfits_00.77.07 testthat_2.3.2 

loaded via a namespace (and not attached):
 [1] ps_1.3.4          fansi_0.4.1       prettyunits_1.1.1 rprojroot_1.3-2  
 [5] withr_2.2.0       digest_0.6.25     crayon_1.3.4      assertthat_0.2.1 
 [9] R6_2.4.1          backports_1.1.7   magrittr_1.5      rlang_0.4.7      
[13] cli_2.0.2         fs_1.5.0          remotes_2.2.0     callr_3.4.3      
[17] ellipsis_0.3.1    desc_1.2.0        devtools_2.3.1    tools_4.0.2      
[21] glue_1.4.1        pkgload_1.1.0     compiler_4.0.2    processx_3.4.3   
[25] pkgbuild_1.1.0    sessioninfo_1.1.1 memoise_1.1.0     usethis_1.6.1    

I included the suggested instructions (to download and run locally) near the end of the README.

Would you mind trying again with the latest version (TEfits_00.77.09)? Thanks!

It appears as though the issue is an R 4.x incompatibility. I'll let you know when that's fixed.

Ok great, thank you. The docs look good, so once the tests are fixed I am happy to recommend this paper for publication. For reference, the error messages I get with v00.77.09 look the same:

> testthat::test_package('TEfits')
-- 1. Error: TEfitAll runs with identity link and OLS error function (@test.chec
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:16:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:17:4
 8. base::`[.data.frame`(...)

-- 2. Error: TEfitAll runs with identity link and logcosh error function (@test.
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:28:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:29:4
 8. base::`[.data.frame`(...)

-- 3. Error: TEfitAll runs with identity link and bernoulli error function (@tes
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:41:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:42:4
 8. base::`[.data.frame`(...)

-- 4. Error: TEfitAll runs with Weibull link and OLS error function (@test.check
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:54:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:55:4
 8. base::`[.data.frame`(...)

-- 5. Error: TEfitAll runs with Weibull link and bernoulli error function (@test.check_TEfitAll.R#67)  ---------------------------------------------------------------
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:67:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:68:4
 8. base::`[.data.frame`(...)

-- 6. Error: TEfitAll runs with logistic link and OLS error function (@test.check_TEfitAll.R#80)  --------------------------------------------------------------------
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:80:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:81:4
 8. base::`[.data.frame`(...)

-- 7. Error: TEfitAll runs with logistic link and bernoulli error function (@test.check_TEfitAll.R#93)  --------------------------------------------------------------
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:93:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:94:4
 8. base::`[.data.frame`(...)

-- 8. Error: TEfitAll runs with d prime link function (@test.check_TEfitAll.R#106)  ----------------------------------------------------------------------------------
undefined columns selected
Backtrace:
 1. testthat::expect_is(...) tests/testthat/test.check_TEfitAll.R:106:2
 4. TEfits::TEfitAll(...) tests/testthat/test.check_TEfitAll.R:107:4
 8. base::`[.data.frame`(...)

== testthat results  =================================================================================================================================================
[ OK: 45 | SKIPPED: 0 | WARNINGS: 4 | FAILED: 8 ]
1. Error: TEfitAll runs with identity link and OLS error function (@test.check_TEfitAll.R#16) 
2. Error: TEfitAll runs with identity link and logcosh error function (@test.check_TEfitAll.R#28) 
3. Error: TEfitAll runs with identity link and bernoulli error function (@test.check_TEfitAll.R#41) 
4. Error: TEfitAll runs with Weibull link and OLS error function (@test.check_TEfitAll.R#54) 
5. Error: TEfitAll runs with Weibull link and bernoulli error function (@test.check_TEfitAll.R#67) 
6. Error: TEfitAll runs with logistic link and OLS error function (@test.check_TEfitAll.R#80) 
7. Error: TEfitAll runs with logistic link and bernoulli error function (@test.check_TEfitAll.R#93) 
8. Error: TEfitAll runs with d prime link function (@test.check_TEfitAll.R#106) 

Error: testthat unit tests failed

Session info:
```

sessionInfo()
R version 4.0.2 (2020-06-22)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 10 x64 (build 18363)

Matrix products: default

locale:
[1] LC_COLLATE=English_United Kingdom.1252 LC_CTYPE=English_United Kingdom.1252 LC_MONETARY=English_United Kingdom.1252 LC_NUMERIC=C
[5] LC_TIME=English_United Kingdom.1252

attached base packages:
[1] stats graphics grDevices utils datasets methods base

other attached packages:
[1] TEfits_00.77.09 testthat_2.3.2

loaded via a namespace (and not attached):
[1] ps_1.3.4 fansi_0.4.1 prettyunits_1.1.1 rprojroot_1.3-2 withr_2.2.0 digest_0.6.25 crayon_1.3.4 assertthat_0.2.1 R6_2.4.1
[10] backports_1.1.7 magrittr_1.5 rlang_0.4.7 cli_2.0.2 fs_1.5.0 remotes_2.2.0 callr_3.4.3 ellipsis_0.3.1 desc_1.2.0
[19] devtools_2.3.1 tools_4.0.2 glue_1.4.1 pkgload_1.1.0 compiler_4.0.2 processx_3.4.3 pkgbuild_1.1.0 sessioninfo_1.1.1 memoise_1.1.0
[28] usethis_1.6.1
```

Evidently I'd accidentally left a necessary dependency on a package that I'd meant to be optional. That bug should be fixed now, and the tests should run.

@akcochrane thank you very much for fixing this! The tests now all pass for me.

I am happy to recommend this paper for publication in JOSS. Congratulations on a nice software package!

@ejhigson, thank you for your thorough review!

@akcochrane, you're almost done! Next step is for me to do some final checks.

@whedon generate pdf

@whedon check references

@whedon check references

@whedon check references

Reference check summary:

OK DOIs

- 10.18637/jss.v067.i01 is OK
- 10.18637/jss.v080.i01 is OK
- 10.1167/17.11.3 is OK
- 10.1007/978-0-387-21706-2 is OK

MISSING DOIs

- None

INVALID DOIs

- 10.1167/jov.0.0.07387 is INVALID

@akcochrane, looks like that ref should have a DOI of 10.1167/jov.20.8.16 and is no longer "in press" (https://pubmed.ncbi.nlm.nih.gov/32790849/).

After that, there are a few last things to take care of:

  • [ ] Make a tagged release of your software, and list the version tag of the archived version here.
  • [ ] Archive the reviewed software in Zenodo
  • [ ] Check the Zenodo deposit has the correct metadata, this includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it); you may also add the authors' ORCID.
  • [ ] List the Zenodo DOI of the archived version here.

You may find this helpful: https://guides.github.com/activities/citable-code/

@whedon generate pdf

Thank you all for your attentive help during this review process!

version tag: v00.77.12

Zenodo archived doi: 10.5281/zenodo.3992314

@whedon set 10.5281/zenodo.3992314 as archive

OK. 10.5281/zenodo.3992314 is the archive.

@akcochrane I see it is in Zenodo as well, but did you mean for the version number to have two zeros before the decimal place?

@cMadan Yes, that was my intention. Would it be better for me to remove them?

@akcochrane, I'm fine with it, just thought it was a bit unconventional and thought I'd check it's not a typo.

@cMadan No, it was not a typo.

@whedon set v00.77.12 as version

OK. v00.77.12 is the version.

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.18637/jss.v067.i01 is OK
- 10.18637/jss.v080.i01 is OK
- 10.1167/17.11.3 is OK
- 10.1007/978-0-387-21706-2 is OK
- 10.1167/jov.20.8.16 is OK

MISSING DOIs

- None

INVALID DOIs

- None

PDF failed to compile for issue #2535 with the following error:

/app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/lib/whedon/bibtex_parser.rb:45:in `block in generate_citations': undefined method `key' for # (NoMethodError)
	from /app/vendor/bundle/ruby/2.4.0/gems/bibtex-ruby-5.1.4/lib/bibtex/bibliography.rb:149:in `each'
	from /app/vendor/bundle/ruby/2.4.0/gems/bibtex-ruby-5.1.4/lib/bibtex/bibliography.rb:149:in `each'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/lib/whedon/bibtex_parser.rb:41:in `generate_citations'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/lib/whedon/compilers.rb:245:in `crossref_from_markdown'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/lib/whedon/compilers.rb:21:in `generate_crossref'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/lib/whedon/processor.rb:95:in `compile'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/bin/whedon:82:in `compile'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-0e09ec0a48e3/bin/whedon:119:in `<top (required)>'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `load'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `<main>'

@arfon, can you look into this?

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.18637/jss.v067.i01 is OK
- 10.18637/jss.v080.i01 is OK
- 10.1167/17.11.3 is OK
- 10.1007/978-0-387-21706-2 is OK
- 10.1167/jov.20.8.16 is OK

MISSING DOIs

- None

INVALID DOIs

- None

:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1652

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1652, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@cMadan - fixed.

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

๐Ÿฆ๐Ÿฆ๐Ÿฆ ๐Ÿ‘‰ Tweet for this paper ๐Ÿ‘ˆ ๐Ÿฆ๐Ÿฆ๐Ÿฆ

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1659
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02535
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

    Any issues? Notify your editorial technical team...

Thank you all so much! I appreciate all of the work you do.

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02535/status.svg)](https://doi.org/10.21105/joss.02535)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02535">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02535/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02535/status.svg
   :target: https://doi.org/10.21105/joss.02535

This is how it will look in your documentation:

DOI

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
