Joss-reviews: [REVIEW]: Rule-based Integration -- An extensive system of symbolic integration rules

Created on 9 Nov 2018 · 41 Comments · Source: openjournals/joss-reviews

Submitting author: @halirutan (Patrick Scheibe)
Repository: https://github.com/RuleBasedIntegration/JOSS-Publication
Version: 4.16.0.4
Editor: @danielskatz
Reviewers: @acolum, @rljacobson
Archive: 10.5281/zenodo.2234522

Status

[status badge]

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/2a2415a7eb8bf7cc903d8469e09c2b8c"><img src="http://joss.theoj.org/papers/2a2415a7eb8bf7cc903d8469e09c2b8c/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/2a2415a7eb8bf7cc903d8469e09c2b8c/status.svg)](http://joss.theoj.org/papers/2a2415a7eb8bf7cc903d8469e09c2b8c)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@acolum & @rljacobson, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @danielskatz know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @acolum

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository URL?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: Does the release version given match the GitHub release (4.16.0.4)?
  • [x] Authorship: Has the submitting author (@halirutan) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @rljacobson

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository URL?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: Does the release version given match the GitHub release (4.16.0.4)?
  • [x] Authorship: Has the submitting author (@halirutan) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Labels: accepted, published, recommend-accept, review


All 41 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @acolum, it looks like you're currently assigned as the reviewer for this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: repository 'Watch' setting]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: GitHub notification settings]

For a list of things I can do to help you, just type:

@whedon commands
Attempting PDF compilation. Reticulating splines etc...

:wave: @acolum and @rljacobson - please go ahead and get started. Read the opening comment in this issue carefully, and also take a look at the JOSS reviewer guidance. In brief, you perform your review and check the boxes above when you are satisfied with an item, and open issues in the [paper repository](https://github.com/RuleBasedIntegration/JOSS-Publication) or code repository as needed.

Note that JOSS normally assumes the paper is in the same repository as the code, so the repository link in the compiled paper (see the comment above, or a later one if changes are made and the paper is recompiled) will have to be changed by the JOSS editors manually before final acceptance.

If you have any questions, please let me know.

@halirutan @danielskatz Just to clarify, my understanding is that the authors are intending to include in their submission the entire Rubi ecosystem, at least as it relates to its Mathematica implementation, which would include all of the following:

  1. The https://github.com/RuleBasedIntegration/Rubi repository
  2. The Mathematica test suite, https://github.com/RuleBasedIntegration/MathematicaSyntaxTestSuite
  3. The documentation, which presently lives primarily at https://rulebasedintegration.org.
  4. The catalog of integration rules, which is housed here: https://github.com/RuleBasedIntegration/IntegrationRules

Is this correct?

If this submission is only a subset of the items I list above, I strongly recommend that the authors revise the submission to include (at least) everything in that list, as those items are all quite intertwined and would make a much more substantial contribution to, and impact on, the field if the submission successfully passes review.

@rljacobson You are correct. I struggled a bit while preparing the manuscript, as JOSS seems to be tailored to single-repository software written in an open language like Python. Rubi is a bit different because the general idea of putting together a large set of integration rules that allow for symbolic integration is at least as important as its implementation. So yes, the Rubi ecosystem consists of the

  • integration rules, available as Mathematica notebooks and rendered PDF files
  • implementation as a Mathematica package, which sets up the integration rules and wraps everything in a nice user interface
  • test suite with 70k+ integrals and solutions for several CAS languages
  • high-level user documentation, installation guide, links to CAS comparisons, and other information on rulebasedintegration.org

Every part of this is MIT-licensed and freely available. For the JOSS publication, we wanted to present the Mathematica implementation, as it is a real program like the other publications on JOSS.
Since its README.md links to the Rubi website, interested readers should be able to find all the information from there.

Nevertheless, in the summary manuscript, we tried to present the whole idea and explain the important parts of the interconnections. Therefore, the outline reads as follows:

  • The general importance of symbolic integrators
  • Rubi and its development as Mathematica package with integration rules that are repeatedly applied to find antiderivatives
  • Details about features and the rigorous test-suite
  • Other software packages that already use Rubi as an integrator (which proves the point of it being "general", as none of these projects uses Mathematica)
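To make the "implementation" bullet above concrete, a typical Rubi session in Mathematica looks roughly like this (a sketch based on Rubi's documented interface; the exact output formatting may differ between versions):

```mathematica
Needs["Rubi`"]            (* load the Rubi package *)

(* Int is Rubi's integration function: integrand first, then variable *)
Int[Sin[x]^3, x]          (* returns an antiderivative such as -Cos[x] + Cos[x]^3/3 *)

(* Steps shows which integration rules were applied, one by one *)
Steps[Int[Sin[x]^3, x]]
```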

I strongly recommend that the authors revise the submission to include (at least) everything in that list

In the manuscript, we have already included references to 1., 2., and 3. The "Integration Rules" are part of the Rubi Mathematica package and are explicitly mentioned in the Repository structure part of the README. The "catalog of integration rules" you mentioned in 4. is in my opinion not necessary here, because we can assume that a user reading this manuscript has Mathematica installed and is better off just opening the notebooks.

Would you agree, or did I misunderstand your comment?

@danielskatz Is it possible to switch to numbers for referencing like this [1]? We cite several of our repositories and the website and referencing by (The Rubi Organization, 2018d) doesn't look so good. Also, is it possible to adjust the size and position of the inserted image?

@halirutan I read your submitted paper just as you describe it, so from my perspective I do not think a revision of the paper is necessary with respect to communicating scope. But as you say, the assumption in the JOSS boilerplate language of a single repository structure does not quite fit this particular application, so I want to make sure we are all on the same page.

@danielskatz A quality review of this software may take longer than is typical for JOSS. First, this software project is quite large. Second, if the functionality and performance claims of this project are verified, it would represent a significant advance in the current state of the art of symbolic computer algebra. The potential impact of this technology recommends a careful peer review, especially with respect to the items _Functionality, Performance_, and _Automated Tests_ of the Review Checklist. These items are also the largest components of this project. I will do my best to be prompt, but I want to communicate to you the size of the review task up front. To be clear, these comments should not be interpreted as a prejudgment of the submission, nor are they intended to in any way disparage the submitting authors, all of whom are well known and respected in their respective subfields for the quality of their work.

πŸ‘‹ @arfon - can you answer these questions about the paper compilation?

Is it possible to switch to numbers for referencing like this [1]? We cite several of our repositories and the website and referencing by (The Rubi Organization, 2018d) doesn't look so good. Also, is it possible to adjust the size and position of the inserted image?

Is it possible to switch to numbers for referencing like this [1]? We cite several of our repositories and the website and referencing by (The Rubi Organization, 2018d) doesn't look so good. Also, is it possible to adjust the size and position of the inserted image?

Some customization is possible - see the docs here: https://pandoc.org/MANUAL.html#citations. That said, I'm pretty sure it's not possible to render citations like this.

If a footnote would suffice in some places then you can use them instead: https://pandoc.org/MANUAL.html#footnotes

@rljacobson - If your review takes a significant amount of time, that's fine, as long as you are willing to do so.

Note that reviewing _Performance_ and _Automated Tests_ can be a review of "are there such things" and "do they do what they claim", rather than "are they 100% complete", though you are of course able to suggest improvements. In some sense, you have to balance the amount of code review and spot-checking you do to increase your understanding of the software and its claims against the time you can spend on this.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@arfon Locally, it is easily possible to make references appear as [1] simply by setting a different citation style. This can be done inside the header of paper.md with e.g.

csl: transactions-on-mathematical-software.csl

I included the style file in the repo and, while this works locally with a pandoc call similar to the one @whedon uses,

pandoc -o test.pdf --filter pandoc-citeproc paper.md

it doesn't work here, and I'm not sure why. I looked over the whedon code, and it seems you clone the repo to a tmp directory and build it there. Maybe the LaTeX template you are using interferes with the citation style.
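For reference, the metadata block I mean is the usual pandoc YAML header at the top of paper.md; only the `csl` line is the change under discussion (the other fields here are illustrative placeholders, not our actual front matter):

```yaml
---
title: "Rule-based Integration -- An extensive system of symbolic integration rules"
authors:
  - name: Patrick Scheibe
# citation style file checked into the repo next to paper.md
csl: transactions-on-mathematical-software.csl
---
```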

@arfon Locally, it is easily possible to make references appear as [1] simply by setting a different citation style. This can be done inside the header of paper.md with e.g.

Hi @halirutan - I'm afraid we don't offer customizations like this, sorry. I realize it's theoretically possible; however, we want to stick to a single citation style.

πŸ‘‹ @acolum & @rljacobson - it looks like both reviews are making progress. I'm just checking in to see if either of you need anything. If you're just working through this, great, but if you are stuck somehow, please let me know.

@arfon

I'm afraid we don't offer customizations like this sorry.

No problem at all. I just wanted to make sure I didn't miss something. I will wait and see what the reviewers think. Maybe I'll convert the "repository citations" to footnotes with links if that is more appropriate. Thanks for looking into this.

Hi @acolum & @rljacobson - it looks like you both are working on your reviews - I'm just checking in to make sure everything is going ok - let me know if it's not.

I'm just checking in to make sure everything is going ok - let me know if it's not.

Yep, everything is going well. Thanks!

πŸ‘‹ @acolum & @rljacobson - I'm just checking in on your reviews again...

Doing ok, @danielskatz. Thanks for checking in.

@danielskatz I have completed my review.

The only open issue relating to the reviewers' checklist is a recommendation to clarify the first steps for new contributors. However, if you read the details of the issue, you will see that the submission has in many ways substantially met that guideline. In my opinion, this issue should not prevent the paper's publication as it stands, which I heartily recommend.

The other "issues" that remain open are only suggestions or what amount to feature requests; they do not reflect my evaluation, in my role as reviewer, of the paper's fitness for publication.

@halirutan @danielskatz I apologize for the long review period. It took some time to substantially confirm the functionality and performance claims of the software. In the words of Carl Sagan, "Extraordinary claims require extraordinary evidence." The software claims to significantly outperform the state of the art in symbolic integration. I have verified this claim on the project's test suite as well as my own suite of tests. It is quite remarkable.

Thanks very much @rljacobson !

@danielskatz I've also completed my review and am happy to recommend this for publication.

Thanks too @acolum !!

@halirutan - you might want to respond to some of @rljacobson's recommendations before we publish, perhaps at least the contributor-guidelines one. Please do so, and when you are ready to publish, archive an up-to-date version of the repository in figshare, Zenodo, or similar, and post the DOI here in a comment.

Let me thank both reviewers for taking the time to look over Rubi.

To give a short summary of @rljacobson's suggestions:

  1. I made clear in the README and the wiki how new contributors should start. Basically, it's "please talk to us on Gitter first", but to give some further guidance ...
  2. I have created issues for improvements that range from trivial to hard and tagged them as help wanted. This is also pointed out in the wiki and is a good starting point for new contributors.
  3. I improved the error message when certain functions are used incorrectly as suggested here.
  4. The remaining two suggestions have been turned into issues in the bug tracker so that contributors can work on them. Both issues would be a perfect fit for newcomers since they are rather trivial but require some time. I still haven't lost hope that people will start contributing.

With these changes, I created a new release, version 4.16.0.5, and used Zenodo to get a DOI:

[DOI badge]

That should be all. Finally, thank you @danielskatz for guiding us along the way.

thanks @halirutan and sorry for the delay in getting to this - I missed the notification...

@whedon set 10.5281/zenodo.2234522 as archive

OK. 10.5281/zenodo.2234522 is the archive.

@whedon accept

Attempting dry run of processing paper acceptance...

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/119

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/119, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/120
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01073
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! πŸŽ‰πŸŒˆπŸ¦„πŸ’ƒπŸ‘»πŸ€˜

    Any issues? Notify your editorial technical team...

πŸ‘‹ @arfon - it looks like something didn't work here - the pdf in the PR looks good, but it doesn't seem to be in http://joss.theoj.org/papers/10.21105/joss.01073 correctly. https://www.theoj.org/joss-papers/joss.01073/10.21105.joss.01073.pdf gives a 404 error.

@danielskatz - those links seem to work for me now. Perhaps GitHub was being a little slow?

maybe - thanks

@halirutan - your paper is now accepted into JOSS βš‘οΈπŸš€πŸ’₯

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01073/status.svg)](https://doi.org/10.21105/joss.01073)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01073">
  <img src="http://joss.theoj.org/papers/10.21105/joss.01073/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01073/status.svg
   :target: https://doi.org/10.21105/joss.01073

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
