Joss-reviews: [REVIEW]: approxposterior: A Python Implementation of “Bayesian Active Learning For Posterior Estimation”

Created on 16 Jun 2018  ·  51 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @dflemin3 (David Fleming)
Repository: https://github.com/dflemin3/approxposterior
Version: v0.1.post1
Editor: @jedbrown
Reviewer: @dmdu
Archive: 10.5281/zenodo.1408178

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/fa452b7562b4ce055c68d5f46a76f866"><img src="http://joss.theoj.org/papers/fa452b7562b4ce055c68d5f46a76f866/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/fa452b7562b4ce055c68d5f46a76f866/status.svg)](http://joss.theoj.org/papers/fa452b7562b4ce055c68d5f46a76f866)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dmdu, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. If you have any questions or concerns, please let @jedbrown know.

Review checklist for @dmdu

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository URL?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: Does the release version given match the GitHub release (v0.1.post1)?
  • [x] Authorship: Has the submitting author (@dflemin3) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Labels: accepted, published, recommend-accept, review

All 51 comments

Hello human, I'm @whedon. I'm here to help you with some common editorial tasks. @dmdu it looks like you're currently assigned as the reviewer for this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that under GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching': https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands
Attempting PDF compilation. Reticulating splines etc...

PDF failed to compile for issue #781 with the following error:

Can't find any papers to compile :-(

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

PDF failed to compile for issue #781 with the following error:

Can't find any papers to compile :-(

@arfon The submission lists version v0.1, which was released prior to submission and does not contain the paper. Can we tell whedon to look at master or another branch?

I'm afraid we can't. I can manually compile it, or perhaps the author can make sure it's available in master?

@arfon It is in 'master' (in paper/paper.md) and was generated fine in pre-review #704. I thought the error here must have been some misconfiguration, like using the v0.1 tag instead of 'master'.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

PDF failed to compile for issue #781 with the following error:

Can't find any papers to compile :-(

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Sorry for the noise here, folks. Looks like there was an error with the URL at the top of the thread, so @whedon couldn't figure out what/where to clone.

Tried installing the package and ran into errors. I've opened an issue at: https://github.com/dflemin3/approxposterior/issues/21

I've also noticed that the link in the review checklist at "Repository: Is the source code for this software available at the repository URL?" is broken. Perhaps whedon should fix it, but I'm not sure how to request this change.

Thanks for spotting this. I've fixed up the URL.

@dflemin3 Bump: do you have a time frame in which to address https://github.com/dflemin3/approxposterior/issues/21?

Hi @jedbrown, sorry for the delay! I'm back in town and hope to address the installation issue within the next week.

The issue above has been addressed and is now closed.

@jedbrown On version checking: at the very top of this page it says "Version: v0.1", but the latest changes are in the most recent release, v0.1.post1. What changes are necessary here to reflect that?

@dmdu Thanks. I just edited the page to reflect that update.

@dflemin3 On your performance claims: perhaps it would be worthwhile to clarify in the paper where the right subfigure of Figure 1 comes from. It looks like you compare analytical models; what empirical data can you present or reference to support these models?

@dmdu thanks for the suggestion! I updated paper.md with the following to discuss how I computed that figure and pushed the changes.

The following figure is a simple demonstration of approxposterior produced using a Jupyter Notebook provided with the code on GitHub. In the left panel, we show the true posterior probability distribution computed by Markov Chain Monte Carlo (MCMC) compared against the result of approxposterior. The two distributions are in excellent agreement. In the right panel, we estimate how the performance of approxposterior compares against MCMC by tracking the number of forward model evaluations required for both methods to converge, using the fiducial parameters given in Wang2017, and by tracking the computational time required for all parts of the approxposterior algorithm, such as training the GP. Since the Wang2017 forward model is analytic and hence requires little computational effort, we estimate computational times by adopting a range of forward model runtimes, from 10 microseconds to 10,000 seconds. For MCMC, the Wang2017 example requires 400,000 iterations to converge, so we can estimate how long MCMC would take to finish for a given forward model runtime by multiplying the number of iterations by the runtime, since the forward model is evaluated at each iteration. For approxposterior, we adopt a similar procedure, but also add the forward model runtimes for building the GP training set, the time required to train the GP, and the time required to derive the approximate posterior distribution.

In terms of approxposterior performance, we see two regimes: for very quick forward models, the GP training time dominates the runtime, since GP predictions take little time (about 30 microseconds); for slow forward models, the runtime is dominated by evaluating the forward model to generate the GP training set. In this regime, approxposterior is several orders of magnitude faster than MCMC.
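For concreteness, here is a back-of-the-envelope version of that estimate in Python. Only the 400,000 MCMC iterations and the roughly 30 microsecond GP prediction time come from the text above; the training-set size and GP training time below are placeholder assumptions for illustration, not numbers from the paper or the notebook.

import numpy as np

# Quantities quoted above
n_mcmc_iterations = 400_000    # iterations for the Wang2017 example to converge
gp_prediction_time = 30e-6     # ~30 microseconds per GP prediction

# Illustrative assumptions (placeholders, not values from the paper)
n_training_points = 250        # forward model calls to build the GP training set
gp_training_time = 60.0        # seconds spent training the GP

# Range of hypothetical forward model runtimes: 10 microseconds to 10,000 seconds
forward_model_runtimes = np.logspace(-5, 4, 10)

# MCMC evaluates the forward model at every iteration
t_mcmc = n_mcmc_iterations * forward_model_runtimes

# approxposterior: build the training set, train the GP, then make cheap
# GP predictions in place of the forward model
t_approx = (n_training_points * forward_model_runtimes
            + gp_training_time
            + n_mcmc_iterations * gp_prediction_time)

for t_fm, t1, t2 in zip(forward_model_runtimes, t_mcmc, t_approx):
    print(f"forward model {t_fm:9.2e} s: MCMC {t1:9.2e} s, approxposterior {t2:9.2e} s")

This reproduces the two regimes described above: the GP training term dominates when the forward model runtime is tiny, and the training-set term dominates when it is large.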

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

The clarification above looks good. I also ran the Scaling_Accuracy notebook and got the same plots.

@jedbrown I completed my review. All looks good here. Let me know if you have any questions.

Looks great. Thanks, @dmdu.

@dflemin3 I noticed two minor issues in the paper. It is odd grammar to say the "code is available here [cite]" in a paper that will be read outside of the repository. Could you reword to "code is available at [cite]" or "code is available on GitHub [cite]" or similar? Also, do you know how to make that arXiv reference include the eprint number? (@arfon How should the BibTeX entry be formatted for that?)

> @arfon How should the BibTeX entry be formatted for that?

Does this help? https://tex.stackexchange.com/a/311325

@jedbrown thanks for catching those! I reworded the "code is..." sentence and updated the Wang+2017 citation with the NASA ADS BibTeX from http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=2017arXiv170309930W&data_type=BIBTEX&db_key=PRE. The changes are up on GitHub.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@dflemin3 Hmm, ca8bce1be2a075ef4aec7915cb4213a0f4fd6c4c changed the citation key, thus breaking those citations. Fixing that still doesn't display the eprint; it seems to be a citation-style issue in which the eprint field is swallowed. I can have a look at what whedon does internally and see if I can resolve it.

@arfon I found the issue: pandoc-citeproc requires the field eprinttype in order to produce a URL or any info about the eprint. My understanding is that archivePrefix is more common in the wild (e.g., https://arxiv.org/hypertex/bibstyles/), but pandoc-citeproc ignores it unconditionally.

See https://github.com/jgm/pandoc-citeproc/issues/348
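For anyone hitting the same problem, an entry along these lines should keep both toolchains happy (a sketch based on the ADS export linked above; the exact field set is illustrative, and carrying both archivePrefix and eprinttype is belt-and-braces rather than a requirement of either tool):

@article{Wang2017,
  author        = {{Wang}, Hongqiao and {Li}, Jinglai},
  title         = {Adaptive Gaussian Process Approximation for Bayesian Inference with Expensive Likelihood Functions},
  journal       = {arXiv e-prints},
  year          = {2017},
  eprint        = {1703.09930},
  archivePrefix = {arXiv},
  eprinttype    = {arxiv}
}

Here archivePrefix is what ADS and the arXiv bibstyles emit, while eprinttype is the biblatex-style field that pandoc-citeproc actually reads.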

@jedbrown Ah perfect, thank you! I merged the PR.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Excellent! @dflemin3 Please archive your repository on Zenodo, figshare, or the like and record the DOI here.

> @arfon I found the issue: pandoc-citeproc requires the field eprinttype in order to produce a URL or any info about the eprint. My understanding is that archivePrefix is more common in the wild (e.g., https://arxiv.org/hypertex/bibstyles/), but pandoc-citeproc ignores it unconditionally.

Good to know. Thanks @jedbrown!

I archived the v0.1.post1 release on Zenodo; here's the DOI: 10.5281/zenodo.1408178. Thanks for all your help and work during the review process!

@whedon set 10.5281/zenodo.1408178 as archive

OK. 10.5281/zenodo.1408178 is the archive.

@arfon Over to you.

@dmdu - many thanks for your review here, and to @jedbrown for editing this submission ✨

@dflemin3 - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.00781 :zap: :rocket: :boom:

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.00781/status.svg)](https://doi.org/10.21105/joss.00781)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.00781">
  <img src="http://joss.theoj.org/papers/10.21105/joss.00781/status.svg" alt="DOI badge" >
</a>

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
