Joss-reviews: [REVIEW]: infotheory: A C++/Python package for multivariate information theoretic analysis

Created on 30 Jul 2019 · 87 comments · Source: openjournals/joss-reviews

Submitting author: @madvn (Madhavun Candadai)
Repository: https://github.com/madvn/infotheory
Version: v1.0
Editor: @poulson
Reviewers: @ajgates42, @artemyk
Archive: 10.5281/zenodo.3671994

Status

(status badge)

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/66bb8ec7cb9beaf15b15f21526fe3af8"><img src="http://joss.theoj.org/papers/66bb8ec7cb9beaf15b15f21526fe3af8/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/66bb8ec7cb9beaf15b15f21526fe3af8/status.svg)](http://joss.theoj.org/papers/66bb8ec7cb9beaf15b15f21526fe3af8)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ajgates42 & @artemyk, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @poulson know.

:sparkles: Please try and complete your review in the next two weeks :sparkles:

Review checklist for @ajgates42

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.0
  • [x] Authorship: Has the submitting author (@madvn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @artemyk

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.0
  • [x] Authorship: Has the submitting author (@madvn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Labels: accepted, published, recommend-accept, review

All 87 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @ajgates42, @Autoplectic it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this repository (https://github.com/openjournals/joss-reviews). As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews :crying_cat_face:

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

(screenshot: repository watch settings)

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

(screenshot: notification settings)

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...

Hi Alexander (@ajgates42) and Ryan (@Autoplectic): It has been almost a month since the review started and it seems neither of you has commented or checked any boxes.

I fully understand that reviews are easy to keep delaying until a ping, and so I am hoping this could serve as one.

If for some reason you feel you would be unable to complete the review, please let me know and I can start the search for a replacement.

Hi @ajgates42 and @Autoplectic how are your reviews coming along?

:wave: Alexander (@ajgates42)
:wave: Ryan (@Autoplectic)

It has been about two months since the review started and we haven't heard back from you yet. Please let us know if we should find replacement reviewers. Switching out one reviewer occasionally happens, but two out of two would be problematic and embarrassing. Would you mind letting us know if you still plan to review the submission in the coming week?

Hi @madvn et al.,
Overall a great package; I just have a few suggestions to improve the usability and functionality.

~ the design of the pipeline seems to focus only on continuous data. This makes its application to discrete data unnecessarily bulky (e.g. I shouldn't need to define a binning for discrete variables). Similarly, when I already know the probability distribution, it would be nice to be able to pass that along directly. Can you streamline the process for these extra use cases?

~ besides dit, you should contrast your package against the other python information theory packages:
pyentropy
IDTxl
JIDT (written in java but has a python wrapper)

~ since one of your main claims is the increased efficiency over other packages, please provide a basic and an advanced example that demonstrate the package's performance.

Also, following the JOSS guidelines:
~ clearly state dependencies in the installation instructions (it looks like your tests require numpy)
~ all paper references need DOIs

Hi @madvn! Looks like one review on your JOSS submission has been started. Will you be able to start addressing comments soon?

Hi @madvn :wave:. It seems that Ryan James (@Autoplectic) is unresponsive and we will need to find a new reviewer. I already exhausted our relevant reviewer pool to get him and @ajgates42 (thank you AJ!) -- do you have any further suggestions for a replacement reviewer?

You might want to try Dr. Artemy Kolchinsky,
https://santafe.edu/people/profile/artemy-kolchinsky
and on GitHub: @artemyk

--
Alexander Gates
Associate Research Scientist
Center for Complex Network Research
Northeastern University
http://alexandergates.net

Hi Artemy (@artemyk) :wave:.

Would you be willing to review this information theory software project, https://github.com/madvn/infotheory, and its associated publication? You were recommended by @ajgates42 (Alexander Gates).

Hello, all. Sorry about the delay in responding.

Thanks, @ajgates42, for your review. I am preparing our responses and will submit them in the coming days.

With regards to the second reviewer, Dr. Artemy Kolchinsky would be great! Dr. Nicholas Timme (http://www.nicholastimme.com/ ; nmtimme on GitHub) is another option.

@poulson I could review it, but not for a couple of weeks. Please let me know what the process involves, as it would be my first time reviewing a software package.

Hi Artemy (@artemyk): thank you for being willing to spend time on this. I will reach out to Nicholas Timme as well and see if they can possibly start a bit sooner than that, but, otherwise, a couple of weeks from now would be a good backup plan.

The JOSS editors have put together a very detailed reviewer guide here:
https://joss.readthedocs.io/en/latest/reviewer_guidelines.html
Please let me know if I can answer any more detailed questions about the process. And, again, thank you.

Hi Nicholas Timme (@nmtimme) :wave:. Madhavun Candadai recommended you as a reviewer for this submission on an information theory software package. Please let me know if you would be willing and able to use your expertise to improve and review this submission.

A detailed reviewer guide can be found at: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Hello

Thanks for contacting me, but unfortunately I won't be able to help right now. I would suggest Joe Lizier in Sydney or Ryan James at UC Irvine. Have a good day!

~Nick

Thank you @nmtimme -- I had reached out to both of them previously and they were unavailable. I think the best option at the moment is to wait a couple of weeks for Artemy (@artemyk) to be available. Thank you for the suggestions!

Hi @artemyk (and @poulson) -- is now a good time to move forward on your review?

@kthyng Please ping me in 1 month.

Hi All,

Just to be clear so no one is waiting on me, I'm not available to provide this review. Thanks!

~Nick

Hi @nmtimme understood. At this point we just need one more reviewer, but thank you for clarifying!

@whedon remind @artemyk in 4 weeks

@artemyk doesn't seem to be a reviewer or author for this submission.

@whedon add @artemyk as reviewer

OK, @artemyk is now a reviewer

@whedon remove @Autoplectic as reviewer

OK, @Autoplectic is no longer a reviewer

@whedon remind @artemyk in 4 weeks

Reminder set for @artemyk in 4 weeks

To summarize for future us:
I've removed one previous reviewer and replaced them with @artemyk, who plans to review in 4 weeks.

:wave: @artemyk, please update us on how your review is going.

Hello @whedon. I have completed my review. Apologies for the delay.

First, I think this is a very welcome package that could be useful to a lot of different research communities.

I have two comments:

1) I agree with all of the issues raised by @ajgates42 .

2) It seems to me that the paper and documentation omit an important piece of information: how is mutual information actually estimated? Is it by binning, some other method (there is a brief citation to Scott 1985), or something else? It is known that estimating mutual information from a finite sample is a difficult statistical problem, and that most simple estimators are biased (see e.g. Paninski, "Estimation of Entropy and Mutual Information", Neural Computation, 2003). I think it's important that the authors (a) prominently state the statistical estimator of mutual information that is being used, (b) explain (even if briefly) how it works, (c) explain the bias properties of this estimator, and (d) demonstrate that the code correctly implements the estimator by running it on samples (of various sizes) from distributions with a known entropy, and joint distributions with known MI, and comparing to numerical results. Ideally, (d) would also be implemented in the automated tests of the package.
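
For context, the bias the reviewer describes is easy to see with a few lines of NumPy, independent of the package under review. A naive plug-in (histogram) estimate of I(X;Y) for two independent variables should be zero, yet it comes out strictly positive on finite samples and only shrinks as the sample size grows:

```python
import numpy as np

def plugin_mi(x, y, bins=10):
    """Naive plug-in estimate of I(X;Y) in bits from a 2D histogram."""
    pxy = np.histogram2d(x, y, bins=bins)[0]
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
for n in [100, 1_000, 10_000, 100_000]:
    x, y = rng.random(n), rng.random(n)  # independent, so the true MI is 0
    print(f"n = {n:>6}: estimated MI = {plugin_mi(x, y):.4f} bits")
```

This sketch is illustrative only; it does not use infotheory's estimator.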

Thank you, @ajgates42 and @artemyk. I will respond to all your comments in the coming days.

@whedon generate pdf

@whedon generate pdf

Hi @madvn et al.,
Overall a great package; I just have a few suggestions to improve the usability and functionality.

Hi @ajgates42,
Thank you for your review comments. Please find responses below and the updated pdf version of the paper here.

~ the design of the pipeline seems to focus only on continuous data. This makes its application to discrete data unnecessarily bulky (e.g. I shouldn't need to define a binning for discrete variables). Similarly, when I already know the probability distribution, it would be nice to be able to pass that along directly. Can you streamline the process for these extra use cases?

It is indeed true that the focus of this package is on continuous variables. This was deliberate, since there are other existing packages that focus on discrete data (e.g., dit). As a result, discrete data can only be handled as a special case of continuous data. Similarly, this package is designed for use with large/complex data whose distribution is not known a priori. We have added line 58 in the paper to state the intended use of this package more clearly. As it stands, the package would need a complete revamp to include these two additional features, so, if it's okay, we would like to leave these extensions for future releases.
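
To make the "special case" point concrete: if a variable is discrete, choosing one bin per value makes the binning step lossless, so treating it as continuous costs nothing beyond the extra setup. A minimal NumPy sketch of that idea (the bin edges here are illustrative, not a prescription from the package documentation):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 3, size=1_000)  # discrete variable with values {0, 1, 2}

# One bin per value: edges at -0.5, 0.5, 1.5, 2.5 map each value to its own bin,
# so the binned distribution equals the exact discrete distribution.
edges = np.arange(-0.5, x.max() + 1.0, 1.0)
binned = np.histogram(x, bins=edges)[0] / len(x)
exact = np.bincount(x, minlength=3) / len(x)
assert np.allclose(binned, exact)
```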

~ besides dit, you should contrast your package against the other python information theory packages:
pyentropy
IDTxl
JIDT (written in java but has a python wrapper)

We have included a new short paragraph in the paper at line 57 to contrast our package with these other packages.

~ since one of your main claims is the increased efficiency over other packages, please provide a basic and an advanced example that demonstrate the package's performance.

There are two contexts in this paper where we mention the efficiency of the package.
In the first, we make a general claim about programming languages: although this package can be used from Python, it is written in C++, which is known to be faster than Python.
In the second, we point to the computational efficiency of average shifted histograms, around which we have designed a sparse data structure.
Neither of these is a claim about computational speed relative to other packages.
However, because the data is encoded in a sparse data structure, the package scales efficiently with the number of data points and the dimensionality of the dataset.
To illustrate this, we have added data_scaling.py and dim_scaling.py and their associated results in the benchmarks folder of the repo, showing that once the data points have been added, estimating informational quantities takes roughly the same amount of time.
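
To make the scaling argument concrete, the benefit of a sparse binned representation can be sketched in a few lines of Python. This is a conceptual illustration only (a dictionary keyed by occupied bin indices), not the package's actual C++ data structure:

```python
from collections import defaultdict
import numpy as np

def sparse_histogram(data, nbins, mins, maxs):
    """Count samples per bin, storing only occupied bins keyed by their
    multi-dimensional index, so memory grows with the number of occupied
    bins rather than with nbins ** n_dims."""
    mins, maxs = np.asarray(mins, float), np.asarray(maxs, float)
    widths = (maxs - mins) / nbins
    counts = defaultdict(int)
    for point in np.asarray(data, float):
        idx = np.clip(((point - mins) // widths).astype(int), 0, nbins - 1)
        counts[tuple(idx)] += 1
    return counts

# 5-dimensional data: a dense histogram would need 50**5 = 312.5 million bins,
# but at most 10,000 of them can be occupied here.
rng = np.random.default_rng(0)
h = sparse_histogram(rng.random((10_000, 5)), nbins=50, mins=[0] * 5, maxs=[1] * 5)
print(len(h), "occupied bins out of", 50**5, "possible")
```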

Also, following the JOSS guidelines:
~ clearly state dependencies in the installation instructions (it looks like your tests require numpy)

Added a sub-section on dependencies to the README here.

~ all paper references need DOIs

Done for all papers except the ones on arXiv that do not have a DOI.

First, I think this is a very welcome package that could be useful to a lot of different research communities.

Hi @artemyk,
Thank you for your comments. Please find responses below and the updated pdf version of the paper here.

I have two comments:

  1. I agree with all of the issues raised by @ajgates42.
  2. It seems to me that the paper and documentation omit an important piece of information: how is mutual information actually estimated? Is it by binning, some other method (there is a brief citation to Scott 1985), or something else? It is known that estimating mutual information from a finite sample is a difficult statistical problem, and that most simple estimators are biased (see e.g. Paninski, "Estimation of Entropy and Mutual Information", Neural Computation, 2003). I think it's important that the authors
    (a) prominently state the statistical estimator of mutual information that is being used,
    (b) explain (even if briefly) how it works,
    (c) explain the bias properties of this estimator,

We have used Average Shifted Histograms as our estimator. As suggested, we have included a new paragraph (line 55) that briefly describes the estimator. Additionally, we have provided general rules for setting up the estimator and have pointed readers to resources on density estimation and its biases.
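
For readers who want to see the mechanics, here is a minimal 1D NumPy sketch of an average shifted histogram in the sense of Scott: m histograms with bin width h, with origins offset by h/m, are averaged, which is equivalent to weighting a fine histogram of width h/m by a triangular kernel. This sketch only illustrates the general idea and is not the package's C++ implementation:

```python
import numpy as np

def ash_density(x, nbins=10, nshifts=10):
    """1D average shifted histogram: average `nshifts` histograms whose
    bin origins are offset by h / nshifts, where h is the bin width."""
    lo, hi = x.min(), x.max()
    h = (hi - lo) / nbins
    delta = h / nshifts
    # Fine grid of width delta covering [lo - h, hi + h].
    fine_edges = np.arange(lo - h, hi + h + delta, delta)
    counts = np.histogram(x, bins=fine_edges)[0].astype(float)
    # Averaging the shifted histograms weights fine bin k+j by (1 - |j|/m).
    weights = 1.0 - np.abs(np.arange(1 - nshifts, nshifts)) / nshifts
    density = np.convolve(counts, weights, mode="same") / (len(x) * h)
    centers = 0.5 * (fine_edges[:-1] + fine_edges[1:])
    return centers, density

x = np.random.default_rng(0).normal(size=1_000)
centers, density = ash_density(x, nbins=20, nshifts=8)
print(f"integral ~ {np.sum(density) * (centers[1] - centers[0]):.3f}")  # close to 1
```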

(d) demonstrate that the code correctly implements the estimator by running it on samples (of various size) from distributions with a known entropy, and joint distributions with known MI, and comparing to numerical results.
Ideally, (d) would also be implemented in the automated tests of the package.

Yes, we agree that this is important to demonstrate. The current version of the website and the repository includes these demos. Specifically, we calculate mutual information for three different samples: two coupled uniform distributions, two coupled uniform distributions with noise drawn from a Gaussian with zero mean and standard deviation 0.02, and two independent uniform random variables. The results are as expected and are described here. The first case, with identical random variables, is also included as part of the automated tests for the package.
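
The expected values for those three cases can also be sanity-checked with the same kind of plain plug-in estimate sketched earlier in this thread (this is not infotheory's average-shifted-histogram estimator, just a check of what the numbers should look like): with k equal bins, two identical uniform variables give I(X;Y) = H(X) ≈ log2(k) bits, adding small Gaussian noise lowers that value, and independent variables give roughly zero.

```python
import numpy as np

def binned_mi(x, y, bins=20):
    """Plain binned (plug-in) estimate of I(X;Y) in bits on [0, 1] x [0, 1]."""
    pxy = np.histogram2d(x, y, bins=bins, range=[[0, 1], [0, 1]])[0]
    pxy /= pxy.sum()
    px, py = pxy.sum(1, keepdims=True), pxy.sum(0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(1)
n, k = 50_000, 20
x = rng.random(n)
cases = {
    "identical":   x,                                          # expect ~log2(k) bits
    "noisy copy":  np.clip(x + rng.normal(0, 0.02, n), 0, 1),  # somewhat lower
    "independent": rng.random(n),                              # expect ~0 bits
}
for name, y in cases.items():
    print(f"{name:11s}: {binned_mi(x, y, k):.3f} bits  (log2(k) = {np.log2(k):.3f})")
```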

@kthyng @madvn
The authors have addressed my concerns. Thanks, -a

:wave: Hey @artemyk...

Letting you know, @kthyng is currently OOO until Thursday, January 23rd 2020. :heart:

@kthyng @madvn
The authors have addressed my concerns. Thanks, -a

Thank you, @artemyk, for the very helpful feedback.

:wave: Hey @madvn...

Letting you know, @kthyng is currently OOO until Thursday, January 23rd 2020. :heart:

@poulson can help shepherd this forward. Nice work everyone!

Hi @ajgates42 -- thank you for all of your volunteering on this so far.

It seems you still haven't checked the following boxes: functionality, performance, installation, community guidelines, authors statement, statement of need, and references.

Do you have lingering concerns on any of these issues? If so, would you mind articulating them?

@ajgates42 Just a gentle reminder to review our responses to your comments. Thank you!

@kthyng @madvn
The authors have addressed most of my concerns. Should be a good package for applications of information theory.

Thank you, @ajgates42, for the very helpful feedback!

@poulson Please let us know how to proceed. Thank you!

@whedon generate pdf

@whedon check references

Reference check summary:

OK DOIs

- 10.1002/j.1538-7305.1948.tb01338.x is OK
- 10.1002/047174882X is OK
- 10.3390/e19100531 is OK
- 10.1214/aos/1176349654 is OK
- 10.1007/978-3-642-53734-9_6 is OK
- 10.3390/e16042161 is OK
- 10.1088/1751-8121/aaed53 is OK
- 10.1103/PhysRevLett.85.461 is OK
- 10.1007/978-3-642-54474-3_1 is OK
- 10.21105/joss.00738 is OK
- 10.21105/joss.01081 is OK
- 10.1371/journal.pone.0140397 is OK
- 10.1111/cogs.12142 is OK
- 10.2417/1200906.1663 is OK
- 10.3389/frobt.2014.00011 is OK
- 10.1162/089976603321780272 is OK
- 10.1080/01621459.1987.10478550 is OK
- 10.1201/b14876 is OK
- 10.1093/biomet/66.3.605 is OK
- 10.1007/978-3-642-21551-3_19 is OK
- 10.1080/01621459.1926.10502161 is OK

MISSING DOIs

- None

INVALID DOIs

- 10.1016/j.jhydrol.2008.10.019v is INVALID

@madvn -- would you mind fixing the one invalid DOI and then registering this package on Zenodo and providing the link?

Hi @poulson ,

I've fixed the invalid DOI and have registered the package with Zenodo at this DOI: https://doi.org/10.5281/zenodo.3671994

Please let me know if there is anything else I need to do. Thanks!

@whedon check references

Reference check summary:

OK DOIs

- 10.1002/j.1538-7305.1948.tb01338.x is OK
- 10.1002/047174882X is OK
- 10.3390/e19100531 is OK
- 10.1214/aos/1176349654 is OK
- 10.1007/978-3-642-53734-9_6 is OK
- 10.3390/e16042161 is OK
- 10.1088/1751-8121/aaed53 is OK
- 10.1103/PhysRevLett.85.461 is OK
- 10.1007/978-3-642-54474-3_1 is OK
- 10.21105/joss.00738 is OK
- 10.21105/joss.01081 is OK
- 10.1371/journal.pone.0140397 is OK
- 10.1111/cogs.12142 is OK
- 10.2417/1200906.1663 is OK
- 10.3389/frobt.2014.00011 is OK
- 10.1162/089976603321780272 is OK
- 10.1080/01621459.1987.10478550 is OK
- 10.1201/b14876 is OK
- 10.1016/j.jhydrol.2008.10.019 is OK
- 10.1093/biomet/66.3.605 is OK
- 10.1007/978-3-642-21551-3_19 is OK
- 10.1080/01621459.1926.10502161 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@whedon generate pdf

Hi @poulson,

Just a gentle reminder to take a look at this. Do let me know if there is anything I need to do. Thank you!

Hi @poulson and @kthyng,

I really appreciate all your work in finding reviewers and getting this paper to this stage. It would be great if this can be officially accepted and published in the coming days. Please let me know if there is anything I can do to help.

Thank you!

@poulson can you help move this along :point_up: is this ready for acceptance in JOSS?

I apologize for the delay on this, @madvn. Thank you for filling in the missing DOI. I will move forward with it right now.

OK. 10.5281/zenodo.3671994 is the archive.

@whedon set v1.0 as version

OK. v1.0 is the version.

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.1002/j.1538-7305.1948.tb01338.x is OK
- 10.1002/047174882X is OK
- 10.3390/e19100531 is OK
- 10.1214/aos/1176349654 is OK
- 10.1007/978-3-642-53734-9_6 is OK
- 10.3390/e16042161 is OK
- 10.1088/1751-8121/aaed53 is OK
- 10.1103/PhysRevLett.85.461 is OK
- 10.1007/978-3-642-54474-3_1 is OK
- 10.21105/joss.00738 is OK
- 10.21105/joss.01081 is OK
- 10.1371/journal.pone.0140397 is OK
- 10.1111/cogs.12142 is OK
- 10.2417/1200906.1663 is OK
- 10.3389/frobt.2014.00011 is OK
- 10.1162/089976603321780272 is OK
- 10.1080/01621459.1987.10478550 is OK
- 10.1201/b14876 is OK
- 10.1016/j.jhydrol.2008.10.019 is OK
- 10.1093/biomet/66.3.605 is OK
- 10.1007/978-3-642-21551-3_19 is OK
- 10.1080/01621459.1926.10502161 is OK

MISSING DOIs

- None

INVALID DOIs

- None

:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1361

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1361, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

The final proof looks great to me, thank you!

@whedon accept

Attempting dry run of processing paper acceptance...

Hi @danielskatz,

Thank you for those edits. Pull requests merged!

Best,
Madhavun

Reference check summary:

OK DOIs

- 10.1002/j.1538-7305.1948.tb01338.x is OK
- 10.1002/047174882X is OK
- 10.3390/e19100531 is OK
- 10.1214/aos/1176349654 is OK
- 10.1007/978-3-642-53734-9_6 is OK
- 10.3390/e16042161 is OK
- 10.1088/1751-8121/aaed53 is OK
- 10.1103/PhysRevLett.85.461 is OK
- 10.1007/978-3-642-54474-3_1 is OK
- 10.21105/joss.00738 is OK
- 10.21105/joss.01081 is OK
- 10.1371/journal.pone.0140397 is OK
- 10.1111/cogs.12142 is OK
- 10.2417/1200906.1663 is OK
- 10.3389/frobt.2014.00011 is OK
- 10.1162/089976603321780272 is OK
- 10.1080/01621459.1987.10478550 is OK
- 10.1201/b14876 is OK
- 10.1016/j.jhydrol.2008.10.019 is OK
- 10.1093/biomet/66.3.605 is OK
- 10.1007/978-3-642-21551-3_19 is OK
- 10.1080/01621459.1926.10502161 is OK

MISSING DOIs

- None

INVALID DOIs

- None

:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1362

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1362, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

๐Ÿฆ๐Ÿฆ๐Ÿฆ ๐Ÿ‘‰ Tweet for this paper ๐Ÿ‘ˆ ๐Ÿฆ๐Ÿฆ๐Ÿฆ

:rotating_light::rotating_light::rotating_light: THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! :rotating_light::rotating_light::rotating_light:

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1363
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01609
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! :tada::rainbow::unicorn::dancer::ghost::metal:

     Any issues? Notify your editorial technical team...

Awesome! :tada: :tada:

Thank you @poulson and @kthyng for your continuous support through this process! Thank you @ajgates42 and @artemyk for your review comments that helped improve the paper and the package overall, and thank you @danielskatz for the final edits!!

:wave: Hey @madvn...

Letting you know, @kthyng is currently OOO until Sunday, March 15th 2020. :heart:

At least for me, the paper itself is not appearing (https://www.theoj.org/joss-papers/joss.01609/10.21105.joss.01609.pdf) - once it does, I will close this issue

Thanks to @poulson for editing and @ajgates42 and @artemyk for reviewing!

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01609/status.svg)](https://doi.org/10.21105/joss.01609)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01609">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01609/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01609/status.svg
   :target: https://doi.org/10.21105/joss.01609

This is how it will look in your documentation:

(DOI badge)

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
