Submitting author: @Xunius (Guangzhi XU)
Repository: https://github.com/ihesp/IPART
Version: v3.0.8
Editor: @kbarnhart
Reviewers: @sadielbartholomew, @rabernat
Archive: 10.5281/zenodo.4164826
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Status badge code:
HTML: <a href="https://joss.theoj.org/papers/e5e7c126fe26a82ab717e7d2ce70a1e1"><img src="https://joss.theoj.org/papers/e5e7c126fe26a82ab717e7d2ce70a1e1/status.svg"></a>
Markdown: [](https://joss.theoj.org/papers/e5e7c126fe26a82ab717e7d2ce70a1e1)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
@sadielbartholomew & @rabernat, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @kbarnhart know.
✨ Please try and complete your review in the next six weeks ✨
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @sadielbartholomew, @rabernat it looks like you're currently assigned to review this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
PDF failed to compile for issue #2407 with the following error:
```
/app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `block in find': No such file or directory - tmp/2407 (Errno::ENOENT)
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `collect!'
	from /app/vendor/ruby-2.4.4/lib/ruby/2.4.0/find.rb:43:in `find'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-446a0298a33b/lib/whedon/processor.rb:61:in `find_paper_paths'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-446a0298a33b/bin/whedon:50:in `prepare'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
	from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
	from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-446a0298a33b/bin/whedon:119:in `load'
	from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in
```
@whedon generate pdf
PDF failed to compile for issue #2407 with the following error:
Can't find any papers to compile :-(
@arfon / @openjournals/joss-eics could you help me ensure the repository is linked correctly to this review issue. Issues may have cropped up because the authors changed the title of the submission and the repository address during the pre-review stage (#2197).
I can edit the title/comments on this thread so that @whedon could find the paper in the review issue (e.g., here):
repo: https://github.com/ihesp/IPART
title: "IPART: A Python Package for Image-Processing based Atmospheric River Tracking"
thanks!
@whedon generate pdf
PDF failed to compile for issue #2407 with the following error:
Can't find any papers to compile :-(
👋 @openjournals/dev - can you take a look at this?
@whedon generate pdf
It was the bold markup in the repo link. It was confusing for Whedon. I removed it and now it works.
@xuanxu thank you! Pretty confident that was my fault.
@kbarnhart Just a quick question: I guess I should not update the master branch during the review stage, should I?
@Xunius answer probably depends on what sort of changes you are making. I would not recommend totally restructuring the package. But if you have more moderate changes (adding a feature, updating the documentation, refactoring something internal), I don't see a reason why those sorts of changes would cause a problem. If you make a change that substantially modifies the dependencies such that a reviewer would need to re-create their compute environment used to test out and review IPART, that would probably not be advisable.
Using an issue to describe why changes were made, and then a Pull Request that brings those changes into master, will also make it clear what changes were made and why.
@sadielbartholomew and @rabernat I wanted to check in with a friendly reminder that we are asking reviewers to complete their reviews within 6 weeks (in about 2 weeks). Please let me know how I can assist throughout the JOSS review process.
Noted, thanks @kbarnhart. At this stage I do not feel I need assistance, I have simply not had much time in recent weeks to move on & finish my review. I plan, and will strive, to have my review completed by the end of this week i.e. the end of July.
Thanks for the reminder. Indeed this had fallen off my radar. I will hopefully be able to get to it today.
@kbarnhart I got another comment regarding installation. I'd like to propose a merge of the `netcdf4` branch to master, which would replace the `cdat` dependency with `netcdf4`. I already got some user feedback complaining about the `cdat` dependency: some say it does not work well in Google Colab, some that it is not supported on Windows. I think this proposed change would make packaging for pip or conda-forge easier. I've already implemented the changes mentioned in the other comment (API documentation: undocumented parameters & methods), so we are not losing the new corrections.
I am quite biased here, but as long as you are considering revamping your I/O routines, I would suggest you look at xarray.
The requirement to operate on files, period, is a big limitation of this tool. What if the user's data are not in netCDF format? Other formats, like GRIB or Zarr, are common. (For example, there is nearly 1 PB of CMIP6 data in Google Cloud in Zarr format.) Or what if the user's data are not in files at all, but rather computed on the fly from some other source?
If your package's routines were able to accept xarray `Dataset` objects, you could effectively forget about dealing with files. Let the user open the files and pass an `xarray.Dataset` to your tool. This object should contain all the raw numpy data, plus any metadata you need to process it. This is the approach used by xESMF (a regridding package), and it works very well.
We wrote a little bit about this concept here: http://pangeo.io/packages.html#guidelines-for-new-packages
I want to clarify that I don't think this sort of refactor is required to have your JOSS paper published. It's just an optional suggestion based on lots of experience with this sort of thing.
@Xunius I think that changing the dependency makes sense. I had hoped that some of my early feedback in pre-review on installation instructions and packaging would have addressed some of these install issues. This is all to say that I strongly support @rabernat's comments on ihesp/IPART/issues/5 about following installation and packaging best practices.
I also favor xarray over netcdf4, though if you have reasons for one dependency over another, that is your decision to make.
Similarly, I've had good success with taking I/O out of the core of a package (e.g., a user is expected to provide a numpy array or `xarray.Dataset` rather than a file). A user can convert many common file formats into these data structures in one or two lines of code, so I don't think it places an unnecessary burden on users.
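As a minimal sketch of that pattern (the function and parameter names here are hypothetical, not IPART's actual API): the core routine receives in-memory numpy arrays, so the caller can load data from netCDF, GRIB, Zarr, or anywhere else beforehand:

```python
import numpy as np

# Hypothetical sketch: the core routine accepts in-memory arrays instead of a
# file path, leaving all file-format handling to the caller.
def detect_strong_flux(ivt, threshold=500.0):
    """Return a boolean mask of grid cells where `ivt` exceeds `threshold`."""
    ivt = np.asarray(ivt, dtype=float)
    return ivt >= threshold

# Usage: the caller handles I/O (e.g. via netCDF4 or xarray) and passes arrays in.
rng = np.random.default_rng(0)
ivt = rng.uniform(0.0, 1000.0, size=(73, 144))  # a (lat, lon) grid
mask = detect_strong_flux(ivt)
print(mask.shape, mask.dtype)
```

Converting any common file format into such arrays is typically one or two lines for the user, which is what keeps this design low-burden.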
Regarding which things are and are not necessary to complete the JOSS process: I think that good packaging definitely is necessary and I would agree with @rabernat that refactoring the I/O is not necessary. You may choose to do it because it makes packaging easier, but that is up to you.
If additional extensive discussion on this point is necessary, I may move the thread to an in-repo issue.
As always, let me know how I can help with questions/clarifications.
@sadielbartholomew thanks for the update. All sounds good.
> I'd like to propose a merge of the netcdf4 branch to master, which would replace the cdat dependency with netcdf4. I already got some user feedbacks complaining about the cdat dependency, some about it not working well in google colab, some about it not supported in windows. I think this proposed change would make packaging to pip or conda-forge easier

> @Xunius I think that changing the dependency makes sense.
I second/third that this would be a good idea. Further evidence that this is causing installation difficulty is that the (immediate) aspect that seemed to prevent `python setup.py develop` from working (before it was cut out for that reason in ihesp/IPART#4) was `cdutil`:

```
...
No local packages or working download links found for cdutil
error: Could not find suitable distribution for Requirement.parse('cdutil')
```

which appears to be a CDAT-related tool. However, that is just an illustration, since I do also agree with the sentiment in ihesp/IPART#5 that you should aim to support a standard installation via `pip install` and/or `conda install`, as that development install command is designed for those wanting to make changes to the codebase rather than use it as a package.
Regarding the I/O points: whilst I do agree that NetCDF (as stated in the current paper) is surely still the most widely used format for geoscience data, so it is most likely the user has their data contained in netCDF file(s), for the minority of cases where they do not, I think the central point is, as @kbarnhart notes:
> A user can convert many common file formats into these data-structures in one or two lines of code
but also vice versa, such that in answer to:
> What if the user's data are not in netCDF format?
ultimately it of course becomes a case of manual conversion and/or use of one of the numerous tools that can straightforwardly convert between formats, before use of the IPART API. Slightly more work, naturally, but I don't think it is a massive barrier to usage of, nor a major deficiency in, IPART; more a 'nice-to-have'. (Saying that, as a caveat, note that I sense I have a lot less experience than everyone here in the field.)
Continuing on the topic of improvements that are not compulsory towards acceptance in the paper given the open criteria, but would be good to think about going forwards, for good practice with metadata I suggest making more use of the CF Conventions (the recommended standard for netCDF), namely as described in the three points below.
1. Increasing the compliance of the datasets included in the repository with the CF Conventions, especially those under the `notebooks` directory, which users may interact with if they try out IPART with the provided Notebooks. Notably, both `uflux_s_6_1984_Jan.nc` & `vflux_s_6_1984_Jan.nc` provided there are marked by a global attribute as being CF-compliant to CF 1.6:

```
:Conventions = "CF-1.6" ;
```

which is okay (relative to the ideal, latest version, 1.8), but immediately I see improvements in compliance that could be made.
For example, the variables & dimensions are all described by a `long_name` attribute, where use of a `standard_name` attribute is preferable as each one is unambiguous (see e.g. [here](http://cfconventions.org/cf-conventions/cf-conventions.html#long-name)). The time, lat & lon coordinates can take standard names with the same identifier as currently used for the long name, and from a quick search of the [names table](https://cfconventions.org/Data/cf-standard-names/73/build/cf-standard-name-table.html) for "eastward" AND "vapor", I think the data itself, with `long_name=Vertical integral of eastward water vapour flux` and units `kg m**-1 s**-1`, could probably be assigned a standard name of `eastward_atmosphere_water_vapor_transport_across_unit_distance`, or similar.
2. Rephrasing aspects of the 'Data preparation' section of the documentation in terms of terminology from the CF Conventions. For instance, instead of stating there that:

> Source data are the u- and v- components of the vertically integrated vapor fluxes, in a rectangular grid.
you could explicitly state the standard names of the data variables which would be applicable, e.g. something similar to `northward_...` and `eastward_atmosphere_water_vapor_transport_across_unit_distance`, and maybe link to a definition of all grids which may be considered rectangular, for clarity. This would make it crystal clear whether a user's dataset(s) may be appropriate for processing by IPART.
3. Making code changes to accommodate conventions. In particular, the required ordering of dimensions for IPART seemingly goes against conventions, given that the 'Data preparation' section states that "the user is responsible for making sure that the data are saved in the following rank order: (time, level, latitude, longitude)", but as conveyed in [this section](http://cfconventions.org/cf-conventions/cf-conventions.html#coards-relationship) "The CF convention places no rigid restrictions on the order of dimensions, however we encourage data producers to make the extra effort to stay within the COARDS standard order" where "COARDS restricts the axis (equivalently dimension) ordering to be longitude, latitude, vertical, and time".
So, you are advocating that users define data dimensions in the inverse order to that recommended. To make IPART more immediately accessible, you could amend your code so that it accepts the outlined conventional order, rather than the inverse.
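To illustrate the kind of accommodation meant here, a minimal sketch (a hypothetical helper, not IPART's actual API) that accepts data in any named dimension order, including the COARDS-recommended one, and transposes it to the order IPART expects:

```python
import numpy as np

# Hypothetical helper (not part of IPART): accept data in any named dimension
# order and transpose it to the (time, level, latitude, longitude) order that
# the IPART documentation currently requires.
IPART_ORDER = ("time", "level", "latitude", "longitude")

def to_ipart_order(data, dims):
    """Transpose `data`, whose current axis names are `dims`, into IPART_ORDER."""
    perm = [dims.index(name) for name in IPART_ORDER]
    return np.transpose(data, perm)

# Example: an array stored in COARDS order (longitude, latitude, vertical, time).
arr = np.zeros((144, 73, 17, 12))
out = to_ipart_order(arr, ("longitude", "latitude", "level", "time"))
print(out.shape)  # (12, 17, 73, 144)
```

With a helper like this at the boundary, users would not need to pre-transpose their datasets by hand.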
> I plan, and will strive, to have my review completed by the end of this week i.e. the end of July.
Sorry, I am running a bit later than this, but I promise to complete my review in the next few days.
Just to note regarding some newly-ticked review points:
> I'd like to propose a merge of the netcdf4 branch to master, which would replace the cdat dependency with netcdf4
@Xunius I notice you have now merged in your `netcdf4` branch commits. This is good, & I will check the installation again with this new dependency set.
I notice you have added more to the documentation too, which is also good. However, in case you are not aware, I have noticed that the API documentation pages (e.g. 'Documentation page for AR_detector.py') have become blank after your latest changes, so please make sure these are re-populated.
Just as a reminder, you should update the 'External libraries used' section of the JOSS paper content to reflect the `cdat` -> `netcdf4` change. On that note, I have noticed some points in the paper itself which are not clear, and some minor formatting issues, so will put in a PR in a moment to suggest how these can be resolved.
@sadielbartholomew thanks for the update and these comments. No worries about timeframe, you and @rabernat have been very punctual with your reviews (JOSS also is striving to be realistic and understanding about how COVID-19 has impacted reviewer and author availability).
> Continuing on the topic of improvements that are not compulsory towards acceptance in the paper given the open criteria, but would be good to think about going forwards, for good practice with metadata I suggest making more use of the CF Conventions [...]
Thanks, I'll think about these. Do you mind if I copy your comments to a new issue in the IPART repo, just to make it easier for me to get back to them later?
Regarding Point 3: the major reason for using the (time, lev, lat, lon) ordering is that the reanalyses seem to follow it (e.g. the sample ERA-I data included in the notebooks, and MERRA2, which the ARTMIP community tries to make a standard for reanalysis-based AR detection). So I felt that it would be a bit confusing if the inputs and outputs were in different orderings. Or maybe it would also be good to somehow detect the input ordering (in a robust way) and apply the same ordering to the outputs. I'll have to think about this later. The other changes seem easy enough to implement quickly.
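The detect-and-restore idea could be sketched as follows (hypothetical helper names, numpy only, not actual IPART code): remember the permutation applied on input, run the core routine in a fixed internal order, then apply the inverse permutation so outputs come back in the user's original order.

```python
import numpy as np

# Sketch of "detect the input ordering and apply the same ordering to the
# outputs": transpose into a fixed internal order, run the core routine,
# then invert the permutation on the result.
def run_in_order(data, dims, internal_order, core_func):
    perm = [dims.index(name) for name in internal_order]  # user -> internal
    inv = np.argsort(perm)                                # internal -> user
    result = core_func(np.transpose(data, perm))
    return np.transpose(result, inv)

arr = np.ones((73, 144, 17, 12))  # stored as (latitude, longitude, level, time)
out = run_in_order(
    arr,
    ("latitude", "longitude", "level", "time"),
    ("time", "level", "latitude", "longitude"),
    lambda x: x * 2.0,  # stand-in for a shape-preserving core routine
)
print(out.shape)  # (73, 144, 17, 12): the same order the user passed in
```

The robust part in practice is inferring `dims` from metadata (e.g. coordinate attributes) rather than requiring the user to spell them out, which the numpy-only sketch does not attempt.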
Just to note regarding some newly-ticked review points:

* I have checked off 'Community guidelines' as they are now clear in both the docs & README after [ihesp/IPART@def1379](https://github.com/ihesp/IPART/commit/def1379d471d51bb9faa567fa40018f1f58fed8b);
* I am checking off the 'Performance' item since I cannot find any performance claims.
Changed `CDAT` to `netCDF` in the JOSS paper in this commit. I guess we should call the bot to rebuild the PDF?

@whedon generate pdf
@Xunius it's definitely fine to copy (with attribution) @sadielbartholomew's comments into an in-repo issue. I would recommend doing that.
Also, in order for whedon to respond, the command must be the first part of the comment.
@whedon generate pdf
PDF failed to compile for issue #2407 with the following error:
```
Error reading bibliography ./paper.bib (line 9, column 3):
unexpected "u"
expecting space, ",", white space or "}"
Error running filter pandoc-citeproc:
Filter returned error status 1
```
Looks like we failed to compile the PDF
Thanks @Xunius.
> Thanks, I'll think about these. Do you mind if I copy your comments to a new issue in the IPART repo, just to make it easier for me to get back to them later?
Not at all, please do. Sorry, maybe it would have been better for me to have put them in an Issue originally.
> Regarding Point 3. The major reason for using the time, lev, lat, lon ordering is that the reanalyses seem to follow this (e.g. the sample ERA-I data included in the notebooks, and the MERRA2 which the ARTMIP community tries to make a standard for reanalysis-based AR detections). So I felt that it would feel a bit confusing if the inputs and outputs are in different orderings.
Ah I see; since there's already a more specific standard you are following, it's of course best to leave it as-is. So long as you have actively thought about the best order, ideally abiding by some formal or informal community standard, I think it is fine.
> Or maybe it would be also good to somehow detect the input ordering (in a robust way) and apply the same ordering to the outputs. I'll have to think about this later.
This is certainly the best solution going forward though, I would say. Anything that minimises the amount of pre-processing a potential user of IPART would have to do on any dataset they might want to use, would be useful.
Possibly my PR on the text broke the formatting somehow. If so, my apologies. Though from the note on https://github.com/openjournals/joss-reviews/issues/2407#issuecomment-668609394 it appears to stem from the BibTeX/bibliography so it may be a different change. Either way, let me know if I can help to fix it.
@whedon generate pdf
Should just be my incorrect bib file formatting.
Thanks @Xunius, and for your patience. I will return to my review tomorrow evening (I am a bit later than hoped last week, given I have been asked to work on a new project at short notice). Hopefully I can finish the review then, if not within a few days after that. In general, I am further along than my checkmarks on the criteria list suggest, since I have started looking into unchecked items and in many cases am close to being happy to approve them.
@sadielbartholomew No problem, you have all been very helpful and I appreciate your time and efforts in improving this project.
The references are very comprehensive & appropriately cited, which is great. @Xunius there are two small points (too minor for a dedicated Issue I think) I would like addressed before I tick the 'software paper -> references' item, though:
two of the links seem to be broken, or at least they do not work for me right now:

* pandas (https://github.com/ihesp/IPART/blob/0ef1561a7fe23db4723bb289ce07b1ae74a9a899/joss/paper.bib#L73-L81)

so please double check that the information for those entries is correct. If it is, I will try them again in case the failure was temporary, e.g. caused by some server downtime or similar.
@whedon generate pdf
@sadielbartholomew Please check out this commit:

* Added a `url` field to the pandas reference.
* Regarding the `Ralph2011` citation: I double checked the DOI (10.1175/2010MWR3596.1), which seems to be correct, see here and the paper PDF. I also tried searching for the DOI at https://www.doi.org/; it does lead to a broken page. Maybe there is something wrong in the DOI parsing, I don't know.

UPDATE:
The DOI of pandas is taken from https://pandas.pydata.org/about/citing.html. Clicking on it in the paper leads to an empty page. But entering and searching for it at https://www.doi.org/ leads me to https://conference.scipy.org/proceedings/scipy2010/mckinney.html.
Hi @kbarnhart, do you have any advice on procedure regarding citations with links that are correct but don't seem to work for some unknown reason? Is it okay to keep those in the paper, or should the URL be removed for safety in case it generally does not work, as opposed to temporarily being down:
> Regarding the Ralph2011 citation. I double checked the DOI (10.1175/2010MWR3596.1) which seems to be correct, see here and the paper PDF. I also tried searching the DOI in https://www.doi.org/, it does lead to a broken page. Maybe there is something wrong in the DOI parsing, I don't know.
Other than this query, I am very happy with the references so can check that item off the review list.
@sadielbartholomew @Xunius It does appear that the DOI link is broken on the Monthly Weather Review site. I'll consult with the JOSS editorial board about what the best approach to handling this is.
@whedon check references
Reference check summary:
OK DOIs
- 10.5281/zenodo.2586088 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.7717/peerj.453 is OK
- 10.1175/1520-0493(1998)126<0725:APAFMF>2.0.CO;2 is OK
- 10.1175/1520-0493(2004)132<1721:SACAOO>2.0.CO;2 is OK
- 10.3389/feart.2014.00002 is OK
- 10.1175/BAMS-84-9-1205 is OK
- 10.1175/1520-0434(2002)017<1152:ITDATO>2.0.CO;2 is OK
- 10.1175/JCLI-D-15-0216.1 is OK
- 10.1007/BF01386390 is OK
- 10.1109/TPAMI.1986.4767851 is OK
- 10.1175/2010MWR3596.1 is OK
- 10.1002/2015JD024257 is OK
- 10.1111/j.1752-1688.2011.00546.x is OK
- 10.1175/JHM-D-13-02.1 is OK
- 10.1002/asl.392 is OK
- 10.1029/2012JD018027 is OK
- 10.1175/JCLI-D-13-00212.1 is OK
- 10.1175/2007JHM855.1 is OK
- 10.1175/MWR-D-11-00126.1 is OK
- 10.1175/JCLI-D-15-0655.1 is OK
- 10.1002/2014GL060299 is OK
- 10.1002/2014GL060881 is OK
- 10.1109/TGRS.2012.2211024 is OK
- 10.1002/qj.828 is OK
- 10.1175/2009MWR3193.1 is OK
- 10.1109/83.217222 is OK
- 10.1175/MWR-D-13-00168.1 is OK
- 10.1175/MWR-D-14-00288.1 is OK
- 10.1175/MWR-D-15-0279.1 is OK
- 10.1002/2017gl075495 is OK
- 10.1002/qj.49712354211 is OK
- 10.1175/JCLI-D-14-00567.1 is OK
- 10.1109/TSMC.1979.4310076 is OK
- 10.7717/peerj.453 is OK
- 10.1029/2019JD031205 is OK
- 10.1029/2018JD029180 is OK
- 10.1029/2019JD031218 is OK
- 10.1029/2019JD030936 is OK
- 10.5194/gmd-11-2455-2018 is OK
- 10.3390/atmos11060628 is OK
MISSING DOIs
- https://doi.org/10.1109/mcse.2011.37 may be missing for title: The NumPy Array: A Structure for Efficient Numerical Computation
- https://doi.org/10.1016/c2017-0-03921-6 may be missing for title: Statistical methods in the atmospheric sciences
INVALID DOIs
- https://doi.org/10.5065/D6H70CW6 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1038/s41592-019-0686-2 is INVALID because of 'https://doi.org/' prefix
- https://doi.org/10.1016/S0146-664X(72)80017-0 is INVALID because of 'https://doi.org/' prefix
A follow up regarding what to do about the Ralph et al. (2011) DOI. Based on consultation with the rest of the JOSS editorial board, we think the best thing to do is nothing. The citation information provided in the .bib file is correct and the paper is real. I don't know exactly why the DOI doesn't work, but it is clearly the correct DOI.
@whedon generate pdf
This commit fixed the 3 invalid DOIs, added the missing DOI (10.1109/mcse.2011.37) for numpy, and deleted the record for _Statistical methods in the atmospheric sciences_, which is not referenced in the paper.
> A follow up regarding what to do about the Ralph et al. (2011) DOI. Based on consultation with the rest of the JOSS editorial board, we think the best thing to do is nothing.
Thanks for asking about this and reporting back, @kbarnhart. That sounds like a sensible decision to me.
@whedon check references
Reference check summary:
OK DOIs
- 10.5065/D6H70CW6 is OK
- 10.5281/zenodo.2586088 is OK
- 10.1109/mcse.2011.37 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.7717/peerj.453 is OK
- 10.1175/1520-0493(1998)126<0725:APAFMF>2.0.CO;2 is OK
- 10.1175/1520-0493(2004)132<1721:SACAOO>2.0.CO;2 is OK
- 10.3389/feart.2014.00002 is OK
- 10.1175/BAMS-84-9-1205 is OK
- 10.1175/1520-0434(2002)017<1152:ITDATO>2.0.CO;2 is OK
- 10.1175/JCLI-D-15-0216.1 is OK
- 10.1007/BF01386390 is OK
- 10.1016/S0146-664X(72)80017-0 is OK
- 10.1109/TPAMI.1986.4767851 is OK
- 10.1175/2010MWR3596.1 is OK
- 10.1002/2015JD024257 is OK
- 10.1111/j.1752-1688.2011.00546.x is OK
- 10.1175/JHM-D-13-02.1 is OK
- 10.1002/asl.392 is OK
- 10.1029/2012JD018027 is OK
- 10.1175/JCLI-D-13-00212.1 is OK
- 10.1175/2007JHM855.1 is OK
- 10.1175/MWR-D-11-00126.1 is OK
- 10.1175/JCLI-D-15-0655.1 is OK
- 10.1002/2014GL060299 is OK
- 10.1002/2014GL060881 is OK
- 10.1109/TGRS.2012.2211024 is OK
- 10.1002/qj.828 is OK
- 10.1175/2009MWR3193.1 is OK
- 10.1109/83.217222 is OK
- 10.1175/MWR-D-13-00168.1 is OK
- 10.1175/MWR-D-14-00288.1 is OK
- 10.1175/MWR-D-15-0279.1 is OK
- 10.1002/2017gl075495 is OK
- 10.1002/qj.49712354211 is OK
- 10.1175/JCLI-D-14-00567.1 is OK
- 10.1109/TSMC.1979.4310076 is OK
- 10.7717/peerj.453 is OK
- 10.1029/2019JD031205 is OK
- 10.1029/2018JD029180 is OK
- 10.1029/2019JD031218 is OK
- 10.1029/2019JD030936 is OK
- 10.5194/gmd-11-2455-2018 is OK
- 10.3390/atmos11060628 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Great, thanks @Xunius the references are all good.
Though, if I may be pedantic, I've noticed that the reference for `networkx` that is correct in the `.bib` is converted to lower case by BibTeX, and therefore in the rendered paper it goes from having the correct capitalisation, as in:

https://github.com/ihesp/IPART/blob/20b5f349c100f9577b494c8e0cf995fbede80a8d/joss/paper.bib#L111

to being all lower-case ("Proceedings of the 7th python in science conference"). It seems minor, but Python should strictly always be capitalised, so we should fix this. I already played around to see what might be going on for my own understanding, and know how to fix it, so I put in a small PR, as referenced above.
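For reference, the standard remedy for this class of problem (and, I assume, essentially what the PR does) is to protect the capital letter with braces in the `.bib` entry, so that BibTeX styles cannot down-case it:

```
booktitle = {Proceedings of the 7th {Python} in Science Conference}
```

Anything inside the inner braces is passed through verbatim by BibTeX's case-changing styles.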
I'm checking off the references component. I hope to finish my review by Friday.
@whedon generate pdf
@sadielbartholomew Thanks. PR https://github.com/ihesp/IPART/pull/11 merged.
@kbarnhart Hi, just to follow up the process, are there any new comments coming in? Where are we at the moment in the entire review process? Thanks.
@Xunius thanks for checking in. The JOSS review process can sometimes be a bit more iterative than a traditional journal. For example, if there is an issue that prevents a reviewer from completing their review (e.g., installation issues), then they may start the review, make an issue, wait for the authors address it, and then the reviewers can resume their review.
It appears that there are a few outstanding issues:
_necessary:_
_nice but not necessary:_
As best as I can tell, you have made great progress in addressing (1) and (2). I've made a small note regarding incorporating tests into the conda-forge meta.yaml in the relevant issue.
As best as I can tell, once that is addressed, you will have addressed (1) and (2), and then it would be appropriate to ping me and the reviewers here to let us know that it makes sense for the review to proceed.
Let me know if you have any further questions or concerns.
@sadielbartholomew and @rabernat thanks for your review comments thus far. I wanted to check in and see what your time-frame is for revisiting and potentially completing your reviews. As best as I can tell @Xunius has addressed the current outstanding issues that this review has raised.
Sorry for my long silence here. I just left some comments on both my issues. I appreciate everyone's patience.
@rabernat thanks for the update and those comments on the relevant issue threads.
JOSS is trying to be mindful of the impact of COVID on people's availability (see statement of reduced service mode above). Which is to say that we understand things may take longer at the moment.
Thanks to everyone for their patience with respect to my review as well. I am hoping to finish my review tonight, or by mid-week at the latest.
Thanks for the updates @rabernat and @sadielbartholomew. :+1:
(I'm commenting regarding review points out of order with respect to the review list here, purely to address the 'low hanging fruit' firstly.)
@Xunius with regards to the 'Quality of writing' point of the paper itself, I think it is very well written and clear, so well done for that. It is all great as far as I am concerned.
Before I tick that off, though, I wanted to check whether you would agree that it would be better to break up the initial 'Summary' section, for the following reasons. I think splitting it into two separate sections would help readers, because currently there are four long paragraphs (plus one short one) there, and ideally a summary is shorter and more concise. It also strikes me that the first two paragraphs are solely about the background, and only the final three discuss IPART itself. So to me there is a natural partition.
Therefore, I would suggest the first two paragraphs can be moved under a section called 'Background' and the rest can stay as the 'Summary'. However, looking at other JOSS papers published I see that plenty have long summary sections, so it is not a blocker to acceptance, by any means. Maybe others prefer one longer section as it flows better, for example. It is of course your paper so if you prefer to keep the whole summary text together as-is, that is fine. What do you think?
(And if anyone else has thoughts on the above, it would be great to hear them.)
Thanks for these comments @sadielbartholomew. I think that organizing the beginning of the paper either way would be fine. I generally tend towards using descriptive section titles, so I'd lean slightly towards adding them.
@Xunius I wanted to check in and see if you had any thoughts regarding @sadielbartholomew's comment about paper structure.
At present there are two remaining issues in the source repository associated with this review. I've just commented there to move us forward. Once those are resolved I suspect it would be appropriate for @sadielbartholomew and @rabernat to re-assess the review checklist and identify if there are any remaining issues or if the submission is ready to move forward.
As always, let me know if you have any questions or comments.
@kbarnhart Thanks for the notice. I'll come back in a few days.
@whedon generate pdf
@sadielbartholomew @kbarnhart I've just updated the JOSS.md structure to add an opening "Background" section as suggested.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
I'm happy to approve this submission. Thanks @Xunius for all your hard work. We asked a lot! I think your package has come a long way.
I'm glad to hear there is a first approval. I am wary that this is now waiting on me so I will re-assess the checklist as soon as I can. I am quite busy until the weekend but I will try to complete the review on outstanding items by the end of this week. Thanks to all for their patience.
@sadielbartholomew thanks for the info. No problem on the timeline.
Thanks to @Xunius for making those changes.
@whedon generate pdf
@rabernat @kbarnhart @sadielbartholomew Thanks for all your efforts put into this work.
I'm also writing to inform you that I'm making some minor edits to the JOSS paper and the README, adding a reference to the paper on this method that was published just a few days ago. The changes are in this commit.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@Xunius thank you for the update. I wanted to ask if you could clarify the difference between this recently published paper in GMD and the manuscript considered here?
@kbarnhart The GMD one contains the full derivation of the methods, parameter sensitivity tests, some comparisons with existing methods in the form of case studies, and extensive discussion of the implications for climate projection applications. The JOSS manuscript, by contrast, provides just enough information to support the Python implementation of the algorithms. The GMD paper was actually submitted earlier than the JOSS one; its review also took a long time.
Hi @Xunius, while I am looking into the issue referenced above, which is likely a user error on my part, I wanted to make one final small suggestion for an update to the paper itself.
I noticed that in September, since we began this review, NumPy published a new paper which they now document as their preferred citation, as described on this page. I think it might be nice to use that up-to-date reference rather than the one from 2011. It is of course up to you which you go with; feel free to keep the old one if you prefer.
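(For reference, the citation NumPy now recommends is the 2020 Nature paper "Array programming with NumPy" by Harris et al., whose DOI, 10.1038/s41586-020-2649-2, also appears in the reference check later in this thread. A sketch of a corresponding .bib entry — abbreviated here, since the full author list is much longer:)

```bibtex
@article{harris2020array,
  author  = {Harris, Charles R. and Millman, K. Jarrod and van der Walt, St{\'e}fan J. and others},
  title   = {Array programming with {NumPy}},
  journal = {Nature},
  volume  = {585},
  pages   = {357--362},
  year    = {2020},
  doi     = {10.1038/s41586-020-2649-2}
}
```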
@whedon generate pdf
@sadielbartholomew Thanks for the update. I've updated the citation.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@sadielbartholomew I wanted to get a sense of when you anticipate completing your review. Thanks!
Hi @kbarnhart, I have a little time later tonight to have a look. I'll see how far I can get and let you know a new estimate after that.
I was seeing some errors (as per linked Issue above) that are probably related to issues in my environment, but I need to be sure that is in fact the case before I can proceed. Until I fix those I can't test the functionality etc. properly. So that is what is holding me up.
I'll finish my review by Wednesday morning (UK time). Thanks again to all for your patience.
No problem @sadielbartholomew thanks for the update.
Nearly there! Bear with me, I will report back by tonight. Thanks.
@sadielbartholomew No problem. Take your time.
I am very happy to now recommend this paper for publication, with many congratulations to Guangzhi and thanks for their patience in waiting for the final review. IPART is clearly an innovative package and I expect it will be a very useful tool for research into or involving atmospheric rivers.
I do have a small number of final comments/questions about the paper content, based on which @Xunius may or may not wish to make some last amendments. Either way, it is not a blocker to my acceptance:
at one point in the paper you state:
"under a warming climate as the atmospheric moisture level is expected to increase"
I take it this is a well-known and well-grounded statement (I personally don't know how established it is as a fact)? If it is not, or not entirely, I advise you to consider citing a supporting reference for validation or further detail.
at another point there is a general statement that is worded in a way which I am confident is not strictly technically correct, namely:
"using the Python programming language as a wrapper to some well-established numeric packages including numpy, scikit-image and networkx, etc."
I don't think it is right to call Python a 'wrapper' to Python modules. If anything it would be the other way around, with e.g. numpy "wrapping" (in the sense of extending) Python, but I would say that is not really true either. So I suggest that you re-word that sentence. Something like "Python programming language with some well-established ..." sounds intuitive to me.
Lastly, I have also put up a small number of Issues and PRs to address some minor aspects I have noticed; however, other than ihesp/IPART#14, which makes some suggestions for minor formatting and wording amendments to the paper after my final read-through, none of these need to be addressed before the paper is published. I am leaving them as notes on potential future work.
Thanks all.
(That, as linked to this Issue, is now all of the PRs and Issues I wanted to raise, FYI. To recap, only ihesp/IPART#14 needs to be looked at before, in my eyes, the paper can be published; the rest are suggestions for further work as general improvements.)
@sadielbartholomew thank you for completing your review! I agree with your distinction between issues that should be addressed before publication (https://github.com/ihesp/IPART/pull/14) and the others.
@Xunius once you've addressed the comments by @sadielbartholomew above, please ping me here and I'll start the final steps in the JOSS review process.
@whedon generate pdf
@kbarnhart Thanks. https://github.com/ihesp/IPART/pull/14 has been merged.
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@Xunius I've made my final copyedits on ihesp/IPART/pull/17. Please review and merge them.
After merging, please do the following:
I can then move forward with recommending the submission be accepted.
@kbarnhart Thanks for the instructions.
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:
@whedon check references
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5065/D6H70CW6 is OK
- 10.5281/zenodo.2586088 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.7717/peerj.453 is OK
- 10.1175/1520-0493(1998)126<0725:APAFMF>2.0.CO;2 is OK
- 10.1175/1520-0493(2004)132<1721:SACAOO>2.0.CO;2 is OK
- 10.3389/feart.2014.00002 is OK
- 10.1175/BAMS-84-9-1205 is OK
- 10.1175/1520-0434(2002)017<1152:ITDATO>2.0.CO;2 is OK
- 10.1175/JCLI-D-15-0216.1 is OK
- 10.1007/BF01386390 is OK
- 10.1016/S0146-664X(72)80017-0 is OK
- 10.1109/TPAMI.1986.4767851 is OK
- 10.1175/2010MWR3596.1 is OK
- 10.1002/2015JD024257 is OK
- 10.1111/j.1752-1688.2011.00546.x is OK
- 10.1175/JHM-D-13-02.1 is OK
- 10.1002/asl.392 is OK
- 10.1029/2012JD018027 is OK
- 10.1175/JCLI-D-13-00212.1 is OK
- 10.1175/2007JHM855.1 is OK
- 10.1175/MWR-D-11-00126.1 is OK
- 10.1175/JCLI-D-15-0655.1 is OK
- 10.1002/2014GL060299 is OK
- 10.1002/2014GL060881 is OK
- 10.1109/TGRS.2012.2211024 is OK
- 10.1002/qj.828 is OK
- 10.1175/2009MWR3193.1 is OK
- 10.1109/83.217222 is OK
- 10.1175/MWR-D-13-00168.1 is OK
- 10.1175/MWR-D-14-00288.1 is OK
- 10.1175/MWR-D-15-0279.1 is OK
- 10.1002/2017gl075495 is OK
- 10.1002/qj.49712354211 is OK
- 10.1175/JCLI-D-14-00567.1 is OK
- 10.1109/TSMC.1979.4310076 is OK
- 10.7717/peerj.453 is OK
- 10.1029/2019JD031205 is OK
- 10.1029/2018JD029180 is OK
- 10.1029/2019JD031218 is OK
- 10.1029/2019JD030936 is OK
- 10.5194/gmd-11-2455-2018 is OK
- 10.3390/atmos11060628 is OK
- 10.5194/gmd-13-4639-2020 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@whedon set v3.0.8 as version
OK. v3.0.8 is the version.
@whedon set 10.5281/zenodo.4164826 as archive
OK. 10.5281/zenodo.4164826 is the archive.
@whedon accept
Attempting dry run of processing paper acceptance...
:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1890
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1890, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
@Xunius I've now recommended that this submission be accepted. The JOSS handling editor in chief will manage final publication.
I agree with the reviewers that the package is much improved since submission. Congratulations on all of your hard work.
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):
OK DOIs
- 10.5065/D6H70CW6 is OK
- 10.5281/zenodo.2586088 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1038/s41592-019-0686-2 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.25080/Majora-92bf1922-00a is OK
- 10.7717/peerj.453 is OK
- 10.1175/1520-0493(1998)126<0725:APAFMF>2.0.CO;2 is OK
- 10.1175/1520-0493(2004)132<1721:SACAOO>2.0.CO;2 is OK
- 10.3389/feart.2014.00002 is OK
- 10.1175/BAMS-84-9-1205 is OK
- 10.1175/1520-0434(2002)017<1152:ITDATO>2.0.CO;2 is OK
- 10.1175/JCLI-D-15-0216.1 is OK
- 10.1007/BF01386390 is OK
- 10.1016/S0146-664X(72)80017-0 is OK
- 10.1109/TPAMI.1986.4767851 is OK
- 10.1175/2010MWR3596.1 is OK
- 10.1002/2015JD024257 is OK
- 10.1111/j.1752-1688.2011.00546.x is OK
- 10.1175/JHM-D-13-02.1 is OK
- 10.1002/asl.392 is OK
- 10.1029/2012JD018027 is OK
- 10.1175/JCLI-D-13-00212.1 is OK
- 10.1175/2007JHM855.1 is OK
- 10.1175/MWR-D-11-00126.1 is OK
- 10.1175/JCLI-D-15-0655.1 is OK
- 10.1002/2014GL060299 is OK
- 10.1002/2014GL060881 is OK
- 10.1109/TGRS.2012.2211024 is OK
- 10.1002/qj.828 is OK
- 10.1175/2009MWR3193.1 is OK
- 10.1109/83.217222 is OK
- 10.1175/MWR-D-13-00168.1 is OK
- 10.1175/MWR-D-14-00288.1 is OK
- 10.1175/MWR-D-15-0279.1 is OK
- 10.1002/2017gl075495 is OK
- 10.1002/qj.49712354211 is OK
- 10.1175/JCLI-D-14-00567.1 is OK
- 10.1109/TSMC.1979.4310076 is OK
- 10.7717/peerj.453 is OK
- 10.1029/2019JD031205 is OK
- 10.1029/2018JD029180 is OK
- 10.1029/2019JD031218 is OK
- 10.1029/2019JD030936 is OK
- 10.5194/gmd-11-2455-2018 is OK
- 10.3390/atmos11060628 is OK
- 10.5194/gmd-13-4639-2020 is OK
MISSING DOIs
- None
INVALID DOIs
- None
Thanks @kbarnhart. I reviewed the paper and all looks good.
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
:bird::bird::bird: :point_right: Tweet for this paper :point_left: :bird::bird::bird:
:rotating_light::rotating_light::rotating_light: THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! :rotating_light::rotating_light::rotating_light:
Here's what you must now do:
Party like you just published a paper! :tada::tada::tada:
Any issues? Notify your editorial technical team...
@kbarnhart @rabernat @sadielbartholomew Before this gets closed, I'd like to thank all of you for your help in improving this little project. I learned a lot during the process.
I assume I can now add the DOI to the README file of the repo?
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02407/status.svg)](https://doi.org/10.21105/joss.02407)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02407">
<img src="https://joss.theoj.org/papers/10.21105/joss.02407/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02407/status.svg
:target: https://doi.org/10.21105/joss.02407
This is how it will look in your documentation:
We need your help!
Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
@Xunius - see Whedon's guidance above on how to add the DOI to your readme โ๏ธ.
@sadielbartholomew, @rabernat - many thanks for your reviews here and to @kbarnhart for editing this submission.
@Xunius - your paper is now accepted into JOSS :zap::rocket::boom: