Joss-reviews: [REVIEW]: bayestestR: Describing Effects and their Uncertainty, Existence and Significance within the Bayesian Framework

Created on 2 Jul 2019  ·  57 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @DominiqueMakowski (Dominique Makowski)
Repository: https://github.com/easystats/bayestestR
Version: 0.2.5
Editor: @cMadan
Reviewer: @paul-buerkner, @tjmahr
Archive: 10.5281/zenodo.3361605

Status

status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/1d180e6004a0dd1e6b235eb24fe66276"><img src="http://joss.theoj.org/papers/1d180e6004a0dd1e6b235eb24fe66276/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/1d180e6004a0dd1e6b235eb24fe66276/status.svg)](http://joss.theoj.org/papers/1d180e6004a0dd1e6b235eb24fe66276)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@paul-buerkner & @tjmahr, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @cMadan know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @paul-buerkner

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: 0.2.5
  • [x] Authorship: Has the submitting author (@DominiqueMakowski) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [ ] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @tjmahr

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: 0.2.5
  • [x] Authorship: Has the submitting author (@DominiqueMakowski) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
accepted published recommend-accept review


All 57 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @paul-buerkner, @tjmahr it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands
Attempting PDF compilation. Reticulating splines etc...

I have just finished my review and have very few minor comments.

  • All the citations in the software paper should list a DOI (if they have one) as per reviewer checklist above.
  • The definition of the maximum a-posteriori value as "the most probable value" is not entirely correct for continuous parameters. Instead, it is the value with the highest density (which still has probability zero for continuous parameters).
  • You seem to use the abbreviation MAP both for the maximum a posteriori and the maximum a priori value. This will likely confuse readers.

Thank you for the thorough review, @paul-buerkner!

Dear @paul-buerkner, thanks a lot for your comments! We addressed them in this PR:

Reviewer 1 (@paul-buerkner)

  • [x] All the citations in the software paper should list a DOI (if they have one) as per reviewer checklist above.
  • Added DOIs for all refs but the following (none was found):

    • see package (here)
    • rstanarm package
    • BayesFactor package
    • Mill's "Objective Bayesian Precise Hypothesis Testing"
    • Multiple Comparisons with BayesFactor, Part 2 (Morey's blog, 2015)
    • Practical bayesian optimization of machine learning algorithms (Snoek's proceedings, 2012)
    • Jeffrey's Theory of Probability book
  • [x] The definition of the maximum a-posteriori value as "the most probable value" is not entirely correct for continuous parameters. Instead it is the value with the highest density (which still have probability zero for continuous parameters).
  • We changed its definition to the following:

"find the Highest Maximum A Posteriori (MAP) estimate of a posterior, _i.e._, the value associated with the highest probability density (the "peak" of the posterior distribution). In other words, it is an estimation of the _mode_ for continuous parameters."

  • [x] you seem to use the abbreviation MAP both for the maximum a posteriori and the maximum a-priori value. This will likely confuse readers.
  • This was likely an error and was addressed by replacing instances of the latter by the former (the maximum a posteriori).

Please note that there are still some references for which we did not find a DOI: we continue our search in the meantime. We hope you will find the revised version satisfying ☺️
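For readers following along, the "value with the highest density" definition quoted above can be illustrated with a minimal, language-agnostic sketch (Python here; this is not bayestestR's map_estimate() implementation, just a histogram-based estimate of the density peak):

```python
# Sketch: the MAP estimate of a posterior approximated from samples is the
# peak of an estimated density, i.e. an estimate of the mode for continuous
# parameters (it differs from the mean for skewed posteriors).
import numpy as np

def map_estimate(samples, bins=200):
    """Return the centre of the histogram bin with the highest density."""
    counts, edges = np.histogram(samples, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])

rng = np.random.default_rng(0)
posterior = rng.gamma(2.0, 1.0, size=50_000)  # true mode = 1.0, mean = 2.0
print(map_estimate(posterior))  # near the mode (1.0), not the mean (2.0)
```

bayestestR itself uses a kernel density estimate rather than a histogram, but the idea is the same: locate the peak of the posterior's density.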

Looks good to me.

@tjmahr, are you still able to review this submission?

@paul-buerkner, thanks again!!

I would like to review it but won't be able to look at it until next week.

@tjmahr, no problem, thanks for following up!

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Please take me off of the list of reviewers. I have received over a dozen e-mails on four projects today.
The first project, yesterday, generated ~30 e-mails to me, most of them not understandable. IMHO, your process for reviewing needs serious work.
Sincerely,
Barry
On Thursday, July 25, 2019, 6:38:25 AM EDT, whedon notifications@github.com wrote:

πŸ‘‰ Check article proof πŸ“„ πŸ‘ˆ

β€”
You are receiving this because you are subscribed to this thread.
Reply to this email directly, view it on GitHub, or mute the thread.

@BarryDeCicco, you should unsubscribe from the GitHub notifications for this repository, or otherwise change your notification settings. See the second post (https://github.com/openjournals/joss-reviews/issues/1541#issuecomment-507707086) in this review thread (same information is available in every review thread).

I worked with version 0.2.4, the most recent on the GitHub repository, although the version mentioned in the checklist is 0.2.3

Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

I do not see anything for contribution guidelines. I would add a CONTRIBUTING.md file.

Comments

The amount of documentation to support the package is very generous. For my review, however, I focused on the README and the software paper.

This package borrows a lot from the Kruschke school of Bayesian inference. HDI and ROPE are a distinct feature of his tutorials and textbook; one does not see them very often in works by, say, Stan developers. Therefore, this package is tremendously useful for those reading his tutorials, or for people like me, who occasionally will quantify an effect with a ROPE percentage or who want to learn about using Bayes factors.

The ROPE procedure and other indices use the highest density interval. Is there any option to use an equal-tailed interval?
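The distinction raised here can be made concrete with a short sketch (Python, language-agnostic; these are not bayestestR's eti()/hdi() implementations): the ETI cuts equal probability from each tail, while the HDI is the shortest interval containing the requested mass, so the two differ for skewed posteriors.

```python
# Sketch: equal-tailed interval (ETI) vs highest density interval (HDI)
# computed from posterior samples.
import numpy as np

def eti(samples, ci=0.89):
    """Equal-tailed interval: cut (1 - ci)/2 probability from each tail."""
    lo = (1 - ci) / 2
    return np.quantile(samples, [lo, 1 - lo])

def hdi(samples, ci=0.89):
    """Shortest interval containing ci of the sorted samples."""
    x = np.sort(np.asarray(samples))
    n = len(x)
    k = int(np.floor(ci * n))
    widths = x[k:] - x[: n - k]
    i = int(np.argmin(widths))
    return np.array([x[i], x[i + k]])

rng = np.random.default_rng(1)
skewed = rng.gamma(2.0, 1.0, size=20_000)  # asymmetric posterior
print(eti(skewed))  # symmetric tail probabilities
print(hdi(skewed))  # narrower, pulled toward the mode for skewed draws
```

For a symmetric posterior the two intervals nearly coincide; the question above is whether the ROPE machinery can be driven by the ETI instead of the HDI.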

README

I was confused by the README. When I see R code followed immediately by a plot, I assume that the R code produced the plot. But the functions in the README produce text output (which is not included in the README) and they do not produce plots. I would include the text output of the R code. I would also note that the figures there are diagrams meant to illustrate the statistical concept. The software paper does a good job of making this point clear.

I don't see a demo for eti().

Moreover, 89 is the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015).

The primeness of 89 is not important. McElreath's choice of 89 in Statistical Rethinking text was to illustrate that interval widths are arbitrary and that there is nothing special about 95 or 90 compared to 89.

equivalence_test() a Test for Practical Equivalence based on the

Needs a verb.

I don't understand the Bayes Factor diagram in the README.

a range of -0.05 to -0.05.

This range is the same number twice.

Savage-Dickey density ratio is computed

Should this have a reference?

Probability of a Value

density_at() isn't computing a probability. I would remove estimate_probability() and probability_at() because they are just aliases for density functions, and density is the more appropriate term.

I don't see a demo for the area under the curve functions.

Documentation

A ROPE-based p of 97% means that there is a probability of .97 that a parameter (described by its posterior distribution) is outside the ROPE. On the contrary, a ROPE-based p of -97% means that there is a probability of .97 that the parameter is inside the ROPE. (R/p_rope.R)

I don't understand how a p-value can get a negative percentage. What would a 0% p-value mean? If this index doesn't act like a familiar p-value, it is probably the wrong name for it.

Software paper

The first mention of bayestestR in the second paragraph is awkward. Specifically, the text shifts from talking about common ways to describe effects in a Bayesian framework to talking about the features of the package:

Additionally, bayestestR also focuses on implementing a Bayesian null-hypothesis testing framework ...

It's great that the output of point_estimate() prints out mean/median/map to make it clear what value is being used.

However, bayestestR functions also include plotting capabilities via the see package (LΓΌdecke, Waggoner, Ben-Shachar, & Makowski, 2019).

I don't see any plotting examples in the README or documentation pages. I see plotting methods in the NAMESPACE.

I think it would be worthwhile to demonstrate that the functions demoed in the article also work on models. For example, I can call p_direction() and bayesfactor_parameters() directly on a model and get the results for each parameter. One of the key contributions of this package is that it can make these indices immediately available to users who are comfortable with brms and rstanarm.

Proofreading concerns

Every reference of Kruschke spells out the author's full name.

Figure 2 should be referenced in the text.

(i.e. the difference

Needs a comma.

The Bayesian framework allows to neatly delineate

Allows one.

developped

Nevertheless, in the absence of user-provided values, bayestestR will automatically find an appropriate range

Nevertheless doesn't make sense.

bases on prior and posterior samples

Based.

The system for building the references section should protect some words from being converted to lowercase. (In LaTeX, this is done with {}). Right now, for example, it says Brms: An r package for bayesian multilevel models using stan but I would make sure that the system produces brms: An R package for Bayesian multilevel models using Stan.
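To make the brace-protection suggestion concrete, a .bib entry can shield capitalization from the bibliography style like this (an illustrative entry, not necessarily the paper's actual .bib source):

```bibtex
@article{burkner2017brms,
  title   = {{brms}: An {R} Package for {Bayesian} Multilevel Models Using {Stan}},
  author  = {B{\"u}rkner, Paul-Christian},
  journal = {Journal of Statistical Software},
  year    = {2017}
}
```

Anything wrapped in braces is passed through verbatim, so "R", "Bayesian", and "Stan" survive styles that lowercase titles.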

Dear @tjmahr, thanks a lot for your thorough review. We have addressed them in this PR:

Features

  • [x] The ROPE procedure and other indices use the highest density interval. Is there any option to use an equal-tailed interval?

We added a ci_method argument in rope() to allow for ETI to be used.

  • [x] density_at() isn't computing a probability. I would remove estimate_probability() and probability_at() because they are just aliases for density functions and density is the more appropriate term.

We removed the two aliases with _probability_. We also clarified in the documentation that it is pertaining to the value of the density function.

README

  • [x] I do not see anything for contribution guidelines. I would add a CONTRIBUTING.md file.

We added a contributing file.

  • [x] I would include the text output of the R code [in the README].

We have additionally included the text output from the R Code in the README.

  • [x] I would also note that the figures there are diagrams meant to illustrate the statistical concept.

We have added a sentence to point out that these figures are meant to illustrate the statistical concepts, and pointed the readers to the see-package, where plotting-methods are provided:

"The following figures are meant to illustrate the (statistical) concepts behind the functions. However, for most functions, plot()-methods are available from the see-package."

  • [x] I don't see a demo for eti().

We have added a demo for eti() to the README.

  • [x] _"Moreover, 89 is the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015)."_ The primeness of 89 is not important. McElreath's choice of 89 in Statistical Rethinking text was to illustrate that interval widths are arbitrary and that there is nothing special about 95 or 90 compared to 89.

We have rephrased the sentence to emphasize the idea behind choosing the 89 as CI-level:

"Moreover, 89 indicates the arbitrariness of interval limits - its only remarkable property is being the highest prime number that does not exceed the already unstable 95% threshold (McElreath, 2015)"

Furthermore, although this was already implied, we emphasized the point of arbitrariness in the paper as well.

  • [x] _"equivalence_test() a Test for Practical Equivalence based on the"_ Needs a verb.

We added a verb to the sentence:

equivalence_test() is a Test for Practical Equivalence based on the...

  • [x] I don't understand the Bayes Factor diagram in the README.

We have added a paragraph to explain the figure more in detail:

The lollipops represent the density of a point-null on the prior distribution (the blue lollipop on the dotted distribution) and on the posterior distribution (the red lollipop on the yellow distribution). The ratio between the two - the Svage-Dickey ratio - indicates the degree by which the mass of the parameter distribution has shifted away from or closer to the null.

  • [x] _"a range of -0.05 to -0.05."_ This range is the same number twice.

Thanks, we fixed the typo!

  • [x] _"Savage-Dickey density ratio is computed"_ Should this have a reference?

Thanks, we have added a reference, and furthermore added a reference-list to the end of the README.

  • [x] I don't see a demo for the area under the curve functions.

See comment from TJ below, no longer necessary.
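The Savage-Dickey logic described in the lollipop explanation above can be sketched numerically (Python, for illustration only; density_at here is a Python stand-in, not bayestestR's R function of the same name, and bayestestR's actual estimator differs):

```python
# Sketch: Savage-Dickey density ratio for a point null at 0, estimated as
# posterior density at the null divided by prior density at the null.
import numpy as np

def density_at(samples, value, bins=200):
    """Histogram-based estimate of the density of `samples` at `value`."""
    counts, edges = np.histogram(samples, bins=bins, density=True)
    i = int(np.clip(np.searchsorted(edges, value) - 1, 0, len(counts) - 1))
    return counts[i]

rng = np.random.default_rng(2)
prior = rng.normal(0.0, 1.0, size=100_000)
posterior = rng.normal(0.8, 0.3, size=100_000)  # mass shifted away from 0

bf01 = density_at(posterior, 0.0) / density_at(prior, 0.0)
print(bf01)  # well below 1: the mass has moved away from the point null
```

A ratio below 1 means the posterior puts less density on the null than the prior did (evidence against the null); above 1 means the mass has shifted toward it.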

Documentation

  • [x] _ROPE-based p documentation_: I don't understand how a p-value can get a negative percentage. What would a 0% p-value mean? If this index doesn't act like a familiar p-value, it is probably the wrong name for it.

We have clarified the documentation of this index and underlined its exploratory nature. We also made clear that the negative sign reflects the direction of the index (whether it corresponds to significance or non-significance), rather than actual negative probabilities, which indeed make no sense.

The ROPE-based \emph{p}-value is an exploratory and non-validated index representing the maximum percentage of \link[=hdi]{HDI} that does not contain (or is entirely contained in, in which case the value is prefixed with a negative sign) the negligible values space defined by the \link[=rope]{ROPE}. It differs from the ROPE percentage, \emph{i.e.}, from the proportion of a given CI in the ROPE, as it represents the maximum CI level needed to reach a ROPE proportion of 0\% or 100\%. Whether the index reflects the ROPE reaching 0\% or 100\% is indicated through the sign: a negative sign is added to indicate that the probability corresponds to the probability of a non-significant effect (a percentage in ROPE of 100\%). For instance, a ROPE-based \emph{p} of 97\% means that there is a probability of .97 that a parameter (described by its posterior distribution) is outside the ROPE. In other words, the 97\% HDI is the maximum HDI level for which the percentage in ROPE is 0\%. On the contrary, a ROPE-based \emph{p} of -97\% indicates that there is a probability of .97 that the parameter is inside the ROPE (percentage in ROPE of 100\%). A value close to 0\% would indicate that the mode of the distribution falls at the edge of the ROPE, in which case the percentage of HDI needed to be on either side of the ROPE becomes infinitely small. Negative values do not refer to negative probabilities \emph{per se}; they simply indicate that the value corresponds to non-significance rather than significance.

Paper

  • [x] The first mention of bayestestR in the second paragraph is awkward. Specifically, the text shifts from talking about common ways to describe effects in a Bayesian framework to talking about the features of the package: _"Additionally, bayestestR also focuses on implementing a Bayesian null-hypothesis testing framework ..."_
  • [x] _"However, bayestestR functions also include plotting capabilities via the see package (LΓΌdecke, Waggoner, Ben-Shachar, & Makowski, 2019)."_: I don't see any plotting examples in the README or documentation pages. I see plotting methods in the NAMESPACE.
  • [x] I think it would be worthwhile to demonstrate that the functions demoed in the article also work on models. For example, I can call p_direction() and bayesfactor_parameters() directly on a model and get the results for each parameter.

Proofreading

  • [x] Every reference of Kruschke spells out the author's full name.

Hopefully fixed (changed the name in the .bib file). However, I am not sure why that would happen. One possible reason is disambiguation, yet all instances were written the same way...

  • [x] Figure 2 should be referenced in the text.
  • [x] _"(i.e. the difference"_: Needs a comma.
  • [x] _"The Bayesian framework allows to neatly delineate"_: Allows one.
  • [x] _developped_
  • [x] _Nevertheless, in the absence of user-provided values, bayestestR will automatically find an appropriate range_: Nevertheless doesn't make sense.
  • [x] _bases on prior and posterior samples_: Based
  • [x] The system for building the references section should protect some words from being converted to lowercase. (In LaTeX, this is done with {}). Right now, for example, it says Brms: An r package for bayesian multilevel models using stan but I would make sure that the system produces brms: An R package for Bayesian multilevel models using Stan.

Typos have been fixed.

We hope you will be satisfied with the revisions ☺️

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

We have added a paragraph to explain the figure more in detail:

The lollipops represent the density of a point-null on the prior distribution (the blue lollipop on the dotted distribution) and on the posterior distribution (the red lollipop on the yellow distribution). The ratio between the two - the Svage-Dickey ratio - indicates the degree by which the mass of the parameter distribution has shifted away from or closer to the null.

Just fix the typo in Savage-Dickey, and I'm satisfied.

Also, thanks, in particular, for adding the ETI functionality for the ROPE methods.

typo has been fixed!

Thanks for the thorough review, @tjmahr!

@DominiqueMakowski, it looks like there are no outstanding issues, is that correct?

@cMadan That is correct ☺️

@DominiqueMakowski, great!!

To move forward with accepting your submission, there are a few last things to take care of:

  • [x] Make a tagged release of your software, and list the version tag of the archived version here.
  • [x] Archive the reviewed software in Zenodo
  • [x] Check the Zenodo deposit has the correct metadata, this includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it); you may also add the authors' ORCID.
  • [x] List the Zenodo DOI of the archived version here.

You may find this helpful: https://guides.github.com/activities/citable-code/

@cMadan we have tagged a v0.2.5 release and obtained a Zenodo identifier ☺️

DOI

@whedon set 0.2.5 as version

OK. 0.2.5 is the version.

@whedon set 10.5281/zenodo.3361605 as archive

OK. 10.5281/zenodo.3361605 is the archive.

@DominiqueMakowski, perfect, thank you!

@openjournals/joss-eics, I think we're all set to accept here!

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Thanks to @paul-buerkner and @tjmahr for reviewing and @cMadan for editing

πŸ‘‹ @DominiqueMakowski - please see https://github.com/easystats/bayestestR/pull/217 and merge it - also carefully check the rest of the bib to make sure I didn't miss anything else (e.g., words in lower case that should be in upper case, odd periods at the end of titles, etc.)

@danielskatz Thanks for the thorough reading! I have read the paper and checked all hyperlinks, everything looks good so far.

I'll go through the paper and check the references now.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

In addition, here are some changes for the paper. (in a PR that I forgot to add but has now been merged) :)

Thanks for the language editing! I have gone through the references and made some minor changes. I will hand over to @DominiqueMakowski for the final check.

Ok - please let me know when you & @DominiqueMakowski are done, then we can proceed.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@danielskatz Thanks a lot for your changes!
@cMadan I think we are good to go ☺️

@whedon accept

Attempting dry run of processing paper acceptance...

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/901

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/901, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🐦🐦🐦 πŸ‘‰ Tweet for this paper πŸ‘ˆ 🐦🐦🐦

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/902
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01541
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! πŸŽ‰πŸŒˆπŸ¦„πŸ’ƒπŸ‘»πŸ€˜

    Any issues? notify your editorial technical team...

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01541/status.svg)](https://doi.org/10.21105/joss.01541)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01541">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01541/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01541/status.svg
   :target: https://doi.org/10.21105/joss.01541

This is how it will look in your documentation:

DOI

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@cMadan @danielskatz @paul-buerkner @tjmahr Thanks a lot again for your time and contributions! 😍
