Joss-reviews: [REVIEW]: pyuvsim: a comprehensive simulation package for radio interferometers in python.

Created on 4 Feb 2019 · 64 comments · Source: openjournals/joss-reviews

Submitting author: @aelanman (Adam Lanman)
Repository: https://github.com/RadioAstronomySoftwareGroup/pyuvsim/
Version: v0.2.1
Editor: @arfon
Reviewer: @ygrange
Archive: 10.5281/zenodo.2847055

Status

(status badge image)

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/cd19f6a8e807d57d3de8cde9f2abaeab"><img src="http://joss.theoj.org/papers/cd19f6a8e807d57d3de8cde9f2abaeab/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/cd19f6a8e807d57d3de8cde9f2abaeab/status.svg)](http://joss.theoj.org/papers/cd19f6a8e807d57d3de8cde9f2abaeab)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@ygrange, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @arfon know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @ygrange

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: Does the release version given match the GitHub release (v0.2.1)?
  • [x] Authorship: Has the submitting author (@aelanman) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Labels: accepted, published, recommend-accept, review


All 64 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @ygrange it looks like you're currently assigned as the reviewer for this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

(watching settings screenshot)

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

(notification settings screenshot)

For a list of things I can do to help you, just type:

@whedon commands
Attempting PDF compilation. Reticulating splines etc...

@ygrange - please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Any questions/concerns please let me know.

Just going through the checkboxes I found two comments:

  • Please add doi to the Dalcin, Paz and Storti paper ( https://doi.org/10.1016/j.jpdc.2005.03.010 )
  • There seems to be no tag for version 0.2.1 (there is a 0.2.0). I don't referee often enough for JOSS to know what the exact policy here is: to have the version released after my comments, or to have a release now and add a new one at publication time if the code needs to change because of something I said. Maybe the editor should comment on this.

I think it's OK to review what is in the repository now (i.e. master) and then for there to be a new release before publication.

Perhaps @aelanman could clarify what they're asking to be reviewed.

Hello, @ygrange. We're asking for the contents of master to be reviewed. We've issued releases for every major version change (generation.major.minor) so far, but I assume we'll do another release after the review is finished.

I'll add the missing doi.

Thanks!

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

PDF failed to compile for issue #1234 with the following error:

Error reading bibliography ./paper.bib (line 134, column 1):
unexpected "v"
expecting space, ",", white space or "}"
Error running filter pandoc-citeproc:
Filter returned error status 1
Looks like we failed to compile the PDF

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

I just noticed that some authors' first names were cut from citations. I'll have to fix this, but I'm going to wait for more feedback first.

Just letting you know I got a bit delayed because I had some issues installing mpi4py on my CentOS test container. Now I have a running version and I can have a look at functionality.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

I'm sorry @aelanman, I'm afraid I can't do that. That's something only editors are allowed to do.

@arfon
Hi Arfon. It's been quite some time... can my collaborators and I expect a review soon?

I had some things taking up priority recently. Apologies! I have some draft comments, but I want to run through examples, read the docs to see if I missed something, etc. I'll plan time for it early next week (Mon/Tue).

--
Y. Grange


Here it comes (again, with apologies for the delay)...

In general: I like the tool and its ambition to work through the measurement equation without shortcuts. I have several comments, most of which are textual.

Paper

  • One minor comment: you list future low-frequency telescopes. Any reason you don't list LOFAR, which is a current low-frequency radio telescope? I think such a tool could be very useful for current telescopes as well (for algorithm development and other R&D on the array).
  • In the readthedocs, you give a nice list of other tools that simulate radio data. You then mention what sets you apart from them (i.e. they all use approximations to speed up). I would add this to the paper and, if possible, quantify the effect of your fully analytic approach on the performance and precision (do I gain 10% precision in 200% of the run time; these kinds of numbers)? This is the only comment that would require a bit of extra work. I am not really sure what the policy of JOSS is in this respect and whether this is in the category "nice to have" or "essential" in that case (@arfon: any comments on this?).
  • Also, it would be nice if either the paper or the documentation could give potentially interested users a rough idea whether this tool is the best fit to their use case as opposed to CASA, OSKAR, FHD, PRISim, etc.

  • "The measurement equation": add citation (Hamaker et al. 1996 ).

README.md

  • The README has a section "Inputs" but doesn't seem to mention outputs (it's a standard uvfits file, I guess, so it would be a single line).
  • In both the README and the documentation the following quick-run command is given:
    `mpirun -n 20 python run_param_pyuvsim.py reference_simulations/obsparam_1.1.yaml`
    It should be
    `mpirun -n 20 python scripts/run_param_pyuvsim.py reference_simulations/obsparam_ref_1.1.yaml`
    and specify where to execute it from.
  • Community guidelines are present, but they don't really say where one can get support (questions that are not per se bug reports). Is that via GitHub issues?

readthedocs documentation

  • "Parameter and configuration Files" section doesn't state where all the files should be. I assume the paths are relative to the path of the main yaml file but It's best to explicitly write that down.
  • In the "Enabling Profiling" section
    To run a simulation with profiling enabled, run the command profiling.set_profiler before starting
    change to(...) run the command profiling.set_profiler() before
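
For clarity, this is roughly the usage I would expect the docs to show (a minimal sketch only; I'm assuming `profiling` here means `pyuvsim.profiling`, and I haven't checked which arguments `set_profiler` accepts, so none are passed):

```python
from pyuvsim import profiling

# Enable the profiler before setting up or starting the simulation run.
profiling.set_profiler()

# ... then configure and run the simulation as usual.
```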

installation instruction and procedure

  • Installation: you only present how to build the master branch. Since you have releases, I think you should explicitly document how to build a (the latest?) release.
  • I would strongly advise copying the installation instructions from readthedocs to the README.md file too.
  • Adding a requirements.txt could also make sense, although I did notice that pyuvsim doesn't really handle requirement installation well.
  • Also list max and min versions if applicable (it seems mpi4py 1.3.1 is too old, for example; this is the version from CentOS 7 EPEL).

tests

  • Is there a reason you don't add the availability of nosetests to the documentation? If I know a package is testable, I will test it, especially if it documents how to test it.
  • One test fails if h5py is not installed. It is probably a dependency via pyuvdata, but it's worth documenting that you need h5py to pass all tests.
  • Also, if you run the tests twice, the test_file.uvfits file will already exist and the system will cope by creating test_file_0.uvfits, causing the corresponding test to fail.

@ygrange Thank you for the feedback! I want to let you know that we've been working to address your comments. Most fixes have been made so far, but we're still trying to make a good comparison with other simulation tools.

It's difficult to quantify the performance in a meaningful way. Some of these simulators are very efficient along a particular axis but not others, and I'm not sure what to use as a consistent metric. We've started writing up a document comparing the architecture and features of pyuvsim with other simulators, as a way to recommend when it might be useful. You can find the document here.

If you'd like to take a look at the document, you can let us know if we're on the right track.

The other difficulty is in quantifying precision/accuracy. For our work, "accuracy" mostly means avoiding unwanted numerical artifacts, and it's hard to put a percentage on that. What has usually happened in the past is we develop a new method or drop an approximation, and by doing so uncover an unexpected effect. If you have any suggestions of a good metric to report, we can try to make a comparison. It would be easier to point to a few examples of the unwanted effects we've managed to avoid.

Perhaps it would be better to emphasize that pyuvsim strives to be as _universally applicable_ as possible. Since we don't make any assumptions about what numerical or instrumental effects are important to the user, we want the base functionality to support an exact calculation for any interferometer design. This is only possible by enabling a full sky, full-polarization calculation, with per-antenna user-defined beam models.

I really like that document! That would make it much easier for me as a reader to appreciate the different tools around and why this is not "yet another tool".
I get the argument on precision/performance measurement. I think stating universal applicability is a good thing. It may also be a good idea to give a few examples of the effects you managed to avoid, especially if those actually occur in one or more of the mentioned alternative tools.

So I guess my main answer to your question is: yes, you are completely on the right track here!

👋 @aelanman — it looks like we're waiting for you to address reviewer comments? Let us know your status. Cheers!

Very sorry for the delay! I believe we've addressed all of the comments with the latest PR.

Paper
* 1 minor comment: you list future low-frequency telescopes. Any reason you don't list LOFAR which is a current low-frequency radio telescope? I think such a tool could be very useful for current telescopes as well (for algorithm development and other R&D on the array).

Major oversight on my part! It's included now.

* In the readthedocs, you give a nice list of other tools that simulate radio data. You then mention what sets you apart from them (i.e. they all use approximations to speed up). I would add this to the paper and if possible, would it be possible to quantify the effect of your fully analytic approach on the performance and precision (do I gain 10% precision in 200% of the run time; these kinds of numbers)? This is the only comment that would require a bit of extra work. I am not really sure what the policy of JOSS is in this respect and whether this is in the category "nice to have" or "essential" in that case (@arfon: any comments on this?).

As per our previous discussion, I've included a new page linked on the RTD page that compares features of pyuvsim with three other simulators used in our field. For FHD, I've briefly discussed the high delay power it introduces when run as a simulator, as an example of one numerical artifact that pyuvsim avoids. For CASA, I mention that its most common mode of operation grids point sources before calculating visibilities, which can introduce source modeling errors. The main context for the precision and accuracy improvements is that we need to avoid introducing spectral structure in foreground source power, and both of these effects are significant for our field of 21 cm cosmology. I also mention that although it can support polarized source models and wide fields of view, these uses are limited to customized wrapper and user-defined tools and are not the default mode of operation.

* Also it would be nice if either the paper or the documentation could give potentially interested users a rough idea whether using this tool is the best fit to their use case as opposed to CASA, OSKAR, FHD, PRISim, etc.

The opening paragraph of the new comparison document cautions the user about the performance limitations of pyuvsim, as well as its robustness and applicability.

* "The measurement equation": add citation (Hamaker et al. 1996 ).

It's cited now. Thanks for pointing that out!

README.md

* The README has a section "Inputs" but doesn't seem to mention outputs (It's a standard uvfits I guess so it would be a single line)

A brief output section is included. pyuvsim is capable of writing out to any file format compatible with pyuvdata, so most of that we leave to the pyuvdata documentation.

* In both the README and the documentation the following quick-run command is given:
  `mpirun -n 20 python run_param_pyuvsim.py reference_simulations/obsparam_1.1.yaml`
  It should be
  `mpirun -n 20 python scripts/run_param_pyuvsim.py reference_simulations/obsparam_ref_1.1.yaml`
  and specify where to execute it from

Fixed.

* Community guidelines are present, but it doesn't really say where one can get support (questions that are not per se bug reports). Is that via github issues?

I've added a little bit to the README to clarify this: support questions should go through GitHub issues.

readthedocs documentation

* "Parameter and configuration Files" section doesn't state where all the files should be. I assume the paths are relative to the path of the main yaml file but It's best to explicitly write that down.

I've added a line to the file to clarify that the path can either be absolute or specified relative to the location of the obsparam file.
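
To illustrate the convention (a hedged sketch only, not the actual pyuvsim parsing code; the function and variable names here are made up):

```python
import os

def resolve_config_path(obsparam_file, path):
    """Absolute paths are used as given; relative paths are interpreted
    relative to the directory containing the obsparam file."""
    if os.path.isabs(path):
        return path
    return os.path.join(os.path.dirname(os.path.abspath(obsparam_file)), path)

# e.g. resolve_config_path('configs/obsparam_ref_1.1.yaml', 'telescope_config.yaml')
# -> '<working dir>/configs/telescope_config.yaml'
```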

* In the "Enabling Profiling" section
  `To run a simulation with profiling enabled, run the command profiling.set_profiler before starting`
  change to `(...) run the command profiling.set_profiler() before (...)`

Fixed.

installation instruction and procedure

* Installation: you only present how to build the master branch. Since you have releases, I think you should explicitly document how to build a (the latest?) release.

Installation instructions include a pip option, which will install the latest stable release. We will issue a new release when this review is finished as well.

* I would strongly advise copying the installation instructions from readthedocs to the README.md file too.

The landing page of RTD is generated from the README.md file, so the installation instructions are on both.

* Adding a requirements.txt could also make sense although I did notice that pyuvsim doesn't really do requirement installation well.

We've added a requirements.txt and set minimum versions on all requirements. The installation instructions have been updated in the README as well. We're also working on an Anaconda package, which should make automated installation much easier, but this will be built with the next release.

* Also list max and min versions if applicable (it seems mpi4py 1.3.1 is too old, for example; this is the version from CentOS 7 EPEL).

As mentioned, we've set minimum version requirements. mpi4py needs to be at least version 3.0.

tests

* Is there a reason you don't add the availability of nosetests to the documentation? If I know a package is testable, I will test it especially if it documents how to test it.

It's mentioned now.

* One test fails if h5py is not installed. Probably it is a dependency via pyuvdata, but it's worth documenting that you need h5py to pass all tests.

We've added "test_requires" to setup.py, which includes h5py.

* Also, if you run the tests twice, the `test_file.uvfits` file will already exist and the system will cope by creating `test_file_0.uvfits`, causing the corresponding test to fail.

I'm unable to recreate this behavior. Is it possible that one of your tests was interrupted before it could clean up?

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@aelanman - many thanks for the detailed response. @ygrange - when you get a chance, could you review @aelanman's responses and confirm that you're satisfied?

OK, will do, though I will have to push it to after Easter.

Thanks for the complete reply to all the points I made! I will not reply to each of them separately because I agree with all points you make there.

Nice to see that testing is not only mentioned in the README, but that decent testing is part of the design to begin with.

Paper looks good!

The following comments are all rather minor.

  • It may be sensible to link to comparison.rst from README.rst but that's fully up to you.

Installing on a clean Docker container: it seems pyuvdata is not in the requirements.txt. The main other issue I encountered is that mpi4py depends on MPI (yeah, that indeed is quite obvious in hindsight ;)). I don't really consider that your problem, though it may actually be fixed in the Anaconda build, which is a great idea.

Please note that when I run the tests in a Docker container running Ubuntu 18.04 on my Mac laptop (2 cores), I get failing tests:

pyuvsim.tests.test_mpi_uvsim.test_run_uvsim ... ERROR
pyuvsim.tests.test_mpi_uvsim.test_run_param_uvsim ... antenna_diameters is not set. Using known values for HERA.
[f58409b099d2:05741] *** Process received signal ***
[f58409b099d2:05741] Signal: Floating point exception (8)
[f58409b099d2:05741] Signal code: Integer divide-by-zero (1)
[f58409b099d2:05741] Failing at address: 0x7fa471a760a0
[f58409b099d2:05741] [ 0] /lib/x86_64-linux-gnu/libc.so.6(+0x3ef20)[0x7fa475204f20]
[f58409b099d2:05741] [ 1] /usr/lib/x86_64-linux-gnu/libopen-pal.so.20(+0x820a0)[0x7fa471a760a0]
[f58409b099d2:05741] [ 2] /usr/lib/x86_64-linux-gnu/libopen-pal.so.20(opal_progress_set_event_poll_rate+0x17)[0x7fa471a1ab47]
[f58409b099d2:05741] [ 3] /usr/lib/x86_64-linux-gnu/libopen-pal.so.20(opal_progress_init+0x19)[0x7fa471a1ab89]
[f58409b099d2:05741] [ 4] /usr/lib/x86_64-linux-gnu/libopen-pal.so.20(opal_init+0x151)[0x7fa471a1bc31]
[f58409b099d2:05741] [ 5] /usr/lib/x86_64-linux-gnu/libopen-rte.so.20(orte_init+0xc9)[0x7fa471cba079]
[f58409b099d2:05741] [ 6] /usr/lib/x86_64-linux-gnu/libmpi.so.20(ompi_mpi_init+0x30e)[0x7fa471f7527e]
[f58409b099d2:05741] [ 7] /usr/lib/x86_64-linux-gnu/libmpi.so.20(MPI_Init+0xb9)[0x7fa471f962f9]
[f58409b099d2:05741] [ 8] /usr/local/lib/python2.7/dist-packages/mpi4py/MPI.so(+0x87218)[0x7fa4722a7218]
[f58409b099d2:05741] [ 9] /usr/bin/python(PyEval_EvalFrameEx+0x54a)[0x564287aae4ca]
[f58409b099d2:05741] [10] /usr/bin/python(PyEval_EvalCodeEx+0x6da)[0x564287aabd0a]
[f58409b099d2:05741] [11] /usr/bin/python(PyEval_EvalFrameEx+0x5cb8)[0x564287ab3c38]
[f58409b099d2:05741] [12] /usr/bin/python(PyEval_EvalCodeEx+0x6da)[0x564287aabd0a]
[f58409b099d2:05741] [13] /usr/bin/python(PyEval_EvalFrameEx+0x567e)[0x564287ab35fe]
[f58409b099d2:05741] [14] /usr/bin/python(PyEval_EvalCodeEx+0x6da)[0x564287aabd0a]
[f58409b099d2:05741] [15] /usr/bin/python(+0x10f619)[0x564287ac7619]
[f58409b099d2:05741] [16] /usr/bin/python(PyObject_Call+0x3e)[0x564287a9777e]
[f58409b099d2:05741] [17] /usr/bin/python(PyEval_EvalFrameEx+0x2aa1)[0x564287ab0a21]
[f58409b099d2:05741] [18] /usr/bin/python(PyEval_EvalFrameEx+0x52b2)[0x564287ab3232]
[f58409b099d2:05741] [19] /usr/bin/python(PyEval_EvalCodeEx+0x6da)[0x564287aabd0a]
[f58409b099d2:05741] [20] /usr/bin/python(+0x10f8bc)[0x564287ac78bc]
[f58409b099d2:05741] [21] /usr/bin/python(PyObject_Call+0x3e)[0x564287a9777e]
[f58409b099d2:05741] [22] /usr/bin/python(PyEval_EvalFrameEx+0x2aa1)[0x564287ab0a21]
[f58409b099d2:05741] [23] /usr/bin/python(PyEval_EvalCodeEx+0x6da)[0x564287aabd0a]
[f58409b099d2:05741] [24] /usr/bin/python(+0x10f619)[0x564287ac7619]
[f58409b099d2:05741] [25] /usr/bin/python(+0x1280de)[0x564287ae00de]
[f58409b099d2:05741] [26] /usr/bin/python(PyObject_Call+0x3e)[0x564287a9777e]
[f58409b099d2:05741] [27] /usr/bin/python(+0x1845f7)[0x564287b3c5f7]
[f58409b099d2:05741] [28] /usr/bin/python(PyEval_EvalFrameEx+0x54a0)[0x564287ab3420]
[f58409b099d2:05741] [29] /usr/bin/python(PyEval_EvalFrameEx+0x52b2)[0x564287ab3232]
[f58409b099d2:05741] *** End of error message ***
Floating point exception

However, I am not fully convinced it is a bug on your side, because I cannot reproduce it inside a Docker container running on a CentOS 7 (64-core) system. So in that case: be aware that users may report this behaviour.

If I test a checkout from master, two tests fail:

  • pyuvsim.tests.test_version.test_construct_version_info (fail; I think that's because "master" != "version 0.2.3")
  • "Test function that defines filenames from parameter dict" (error because of the dependency on h5py)

Would it be possible to make those tests conditional in some way? Especially the one that depends on h5py.
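
For concreteness, this is the kind of thing I have in mind for the h5py case (a sketch only; the real test name and contents in pyuvsim/tests are of course different):

```python
import unittest

try:
    import h5py  # optional here; normally pulled in via pyuvdata for uvh5 output
    HAVE_H5PY = True
except ImportError:
    HAVE_H5PY = False


def test_write_uvh5():
    if not HAVE_H5PY:
        # nose (and pytest) treat unittest.SkipTest as a skipped test
        raise unittest.SkipTest("h5py not installed; skipping uvh5 write test")
    # ... the uvh5-writing assertions would go here
```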

BTW, if you reply to the points made, I'm fully OK with proceeding without you having to wait for another OK from my side.

@aelanman - please revisit this submission when you get a chance.

@arfon @ygrange Thank you for looking over the revisions. I've been trying to reproduce your errors. Are you installing via the repo or through the PyPI package?

The h5py dependence is only in one part of one test and should be easy enough to remove, so I'll do that.

I will also put a link to the comparison page in the README.

Hmkay, that's a bit frustrating. I will retry in the near future, but right now I don't have much time to do it and document it nicely and so on (sorry!). What I did is build it in a clean Ubuntu 18.04 Docker container. The MPI I used is the default OpenMPI from that distro. Could be the MPI build, I guess.

I tried an Ubuntu 18.04 Docker container with MPICH and that worked, so it seems like it might be the OpenMPI build.

I had to remove pyuvdata from the requirements.txt because it broke the `pip install -r requirements.txt` build in the clean environment. I think the problem is that pyuvdata isn't really configured to be installed as a dependency via pip. I've been trying to work out what that issue is.

For now, though, I think I'll just add a note to the installation instructions to install pyuvdata separately when installing with pip, and recommend the Anaconda option once that's finished.

The requested changes have been merged to the master branch, so this is ready for a final review. @ygrange @arfon

The issue you report wrt pyuvdata doesn't sound completely unknown to me. For some reason I never mentioned it here but I think you are right: it somehow doesn't play well with pip.

My really absolutely final comment :)

Tests work almost like a charm using MPICH, though I have one failure now in my setup:

yyy@017ab4273a6c:~/pyuvsim$ nosetests
............antenna_diameters is not set. Using known values for HERA.
..............................E................
======================================================================
ERROR: Test function that defines filenames from parameter dict
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/nose/case.py", line 197, in runTest
    self.test(*self.arg)
  File "/home/yyy/pyuvsim/pyuvsim/tests/test_utils.py", line 157, in test_write_uvdata
    os.remove(ofname + '.uvh5')
OSError: [Errno 2] No such file or directory: './temporary_test_data/test_file.uvh5'
-------------------- >> begin captured stdout << ---------------------
Outfile path:  ./temporary_test_data/test_file.uvfits
Outfile path:  ./temporary_test_data/test_file
Outfile path:  ./temporary_test_data/test_file_0

--------------------- >> end captured stdout << ----------------------

----------------------------------------------------------------------
Ran 59 tests in 606.297s

Looking into the code, it looks like line 157 cleans up no matter whether the try block in lines 149-153 succeeded. I could probably even fix it myself and open a pull request (move line 157 inside the try block, or add a finally block), but I am not so sure that is ethically very clean.
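
Something like this is what I mean (a sketch only, not the actual test_utils.py code; `uv` stands in for the pyuvdata UVData object the test writes out):

```python
import os

def write_and_cleanup(uv, ofname):
    uvh5_name = ofname + '.uvh5'
    try:
        uv.write_uvh5(uvh5_name)  # raises if h5py is not available
    finally:
        # Only remove the file if it was actually written.
        if os.path.exists(uvh5_name):
            os.remove(uvh5_name)
```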

Testing with h5py installed does work, as expected.

Ah, sorry about that... I hadn't tried the tests without h5py. It should be redundant anyway. I've opened a PR with a fix. It should be merged within a couple hours.

Great!

Just for completeness: I advise the editor to proceed with publishing!

--
Y. Grange


@arfon The final changes have been merged.

@whedon commands

Here are some things you can ask me to do:

# List all of Whedon's capabilities
@whedon commands

# Assign a GitHub user as the sole reviewer of this submission
@whedon assign @username as reviewer

# Add a GitHub user to the reviewers of this submission
@whedon add @username as reviewer

# Remove a GitHub user from the reviewers of this submission
@whedon remove @username as reviewer

# List of editor GitHub usernames
@whedon list editors

# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers

# Change editorial assignment
@whedon assign @username as editor

# Set the software archive DOI at the top of the issue e.g.
@whedon set 10.0000/zenodo.00000 as archive

# Set the software version at the top of the issue e.g.
@whedon set v1.0.1 as version

# Open the review issue
@whedon start review

EDITORIAL TASKS

# Compile the paper
@whedon generate pdf

# Compile the paper from alternative branch
@whedon generate pdf from branch custom-branch-name

# Remind an author or reviewer to return to a review after a
# certain period of time (supported units days and weeks)
@whedon remind @reviewer in 2 weeks

# Ask Whedon to accept the paper and deposit with Crossref
@whedon accept

# Ask Whedon to check the references for missing DOIs
@whedon check references

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@whedon check references

Attempting to check references...

```
Reference check summary:

OK DOIs

  • 10.1093/mnras/stw2337 is OK
  • 10.5281/zenodo.2548117 is OK
  • 10.21105/joss.00140 is OK
  • 10.1088/0004-637X/759/1/17 is OK
  • 10.1051/0004-6361/201322068 is OK
  • 10.3847/1538-3881/aabc4f is OK
  • 10.1016/j.jpdc.2005.03.010 is OK
  • 10.1051/aas:1996146 is OK

MISSING DOIs

  • None

INVALID DOIs

  • None
```

@whedon accept

No archive DOI set. Exiting...

@aelanman - At this point could you make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission.

@arfon Okay. We've made a new release and archived it on Zenodo here:
https://doi.org/10.5281/zenodo.2847055

@whedon set 10.5281/zenodo.2847055 as archive

OK. 10.5281/zenodo.2847055 is the archive.

@whedon accept

Attempting dry run of processing paper acceptance...

```
Reference check summary:

OK DOIs

  • 10.1093/mnras/stw2337 is OK
  • 10.5281/zenodo.2548117 is OK
  • 10.21105/joss.00140 is OK
  • 10.1088/0004-637X/759/1/17 is OK
  • 10.1051/0004-6361/201322068 is OK
  • 10.3847/1538-3881/aabc4f is OK
  • 10.1016/j.jpdc.2005.03.010 is OK
  • 10.1051/aas:1996146 is OK

MISSING DOIs

  • None

INVALID DOIs

  • None
```

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/692

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/692, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

@ygrange - many thanks for your review here ✨

@aelanman - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.01234 :zap::rocket::boom:

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01234/status.svg)](https://doi.org/10.21105/joss.01234)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01234">
  <img src="http://joss.theoj.org/papers/10.21105/joss.01234/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01234/status.svg
   :target: https://doi.org/10.21105/joss.01234

This is how it will look in your documentation:

(DOI badge image)

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
