Submitting author: @viktor76525 (Viktor)
Repository: https://github.com/ucgmsim/GMSimViz
Version: 1.3
Editor: @lheagy
Reviewers: @leouieda, @hugoledoux
Archive: 10.5281/zenodo.2590852
Status badge code:
HTML: <a href="http://joss.theoj.org/papers/f84c1d05ea939f193fa0b74bb4c2b1c2"><img src="http://joss.theoj.org/papers/f84c1d05ea939f193fa0b74bb4c2b1c2/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/f84c1d05ea939f193fa0b74bb4c2b1c2/status.svg)](http://joss.theoj.org/papers/f84c1d05ea939f193fa0b74bb4c2b1c2)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
@leouieda & @hugoledoux, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @lheagy know.
Hello human, I'm @whedon. I'm here to help you with some common editorial tasks. @leouieda, it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
PDF failed to compile for issue #808 with the following error:
/app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon.rb:91:in `block in check_orcids': Problem with ORCID (0000-0000-0000-0000) for Viktor Polak (RuntimeError)
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon.rb:89:in `each'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon.rb:89:in `check_orcids'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon.rb:70:in `initialize'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon/processor.rb:32:in `new'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/lib/whedon/processor.rb:32:in `set_paper'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/bin/whedon:37:in `prepare'
    from /app/vendor/bundle/ruby/2.3.0/gems/thor-0.20.0/lib/thor/command.rb:27:in `run'
    from /app/vendor/bundle/ruby/2.3.0/gems/thor-0.20.0/lib/thor/invocation.rb:126:in `invoke_command'
    from /app/vendor/bundle/ruby/2.3.0/gems/thor-0.20.0/lib/thor.rb:387:in `dispatch'
    from /app/vendor/bundle/ruby/2.3.0/gems/thor-0.20.0/lib/thor/base.rb:466:in `start'
    from /app/vendor/bundle/ruby/2.3.0/bundler/gems/whedon-e0f72c5e8125/bin/whedon:99:in `load'
    from /app/vendor/bundle/ruby/2.3.0/bin/whedon:22:in
👋 Thanks @leouieda and @hugoledoux for being willing to review! This issue has a checklist for each of you in the main issue thread to help guide the review. Please let me know if you have any questions or if I can help clarify anything!
@lheagy @viktor76525 sorry for the huge delay. I'll try to get some time for the review this week.
there is no release
I accepted thinking, perhaps foolishly, that I could just install it under macOS. But after trying to install a few libraries, I just saw this: https://github.com/ucgmsim/GMSimViz#other-systems
Is this because you've never tried, or because you know it's impossible?
Also, I could install most of the packages, but sometimes it's impossible to know which one you mean. Like with MPI, under macOS one can do this and that works:
brew install open-mpi
pip install mpi4py
But it would be nice to know which python package (URL in pypi) is necessary.
For FFMpeg, there are many of them, which ones do you use? I couldn't figure it out.
Also, stating Python >2.6 is misleading. I started with the current version of Python (3.6.5), and noticed that you mean Python 2 only... Python 2 is deprecated from next year, I believe; why still use it?
So before going further, I'd like to know if it's possible under macOS at all.
But I have the feeling it might be Linux-only; can it still be accepted in that case?
What are the rules of JOSS?
@lheagy : can you help us here?
Also, even if it's Linux-only, the installation of the dependencies is very "hacky", i.e. one must fiddle a lot... Is it possible to streamline the installation?
Hi @hugoledoux, thanks for your comments. To answer your question, we can accept platform-specific packages. If it runs only on Linux, that is okay. However, if having a workflow to install on other systems (e.g. mac) is not overly challenging, I would encourage that to be added to the docs (even if there is a disclaimer that the workflow is a bit hacky) - This would increase the potential audience and user-base of the package.
Regarding Python 2 vs Python 3 - we can accept a python 2 specific package. In the docs or issues on the repository, any plans to upgrade to python 3 (or not to) should be explained. The community is moving to Python 3 and Python 2 will be losing support, so at least having a plan to migrate over would be important for longevity.
I hope this helps.
👋 Hi @leouieda, @hugoledoux, @viktor76525: Just checking in to see how things are going.
Sorry for the delay in response. I was away for a few weeks but am now back.
@hugoledoux is right, there should be no reason why it doesn't work on OSX; we simply had no machines to test on. "Unsupported" means we cannot give instructions for those who are not confident with their environment. Technically, the instructions under Linux are for apt-based distributions.
Python >2.6 means Python 2, not 3, i.e. 2.6 <= python < 3. If this is an issue I can go through and make the necessary changes. The reason for developing in Python 2 was organisational resistance, which I agree is nonsense.
MPI is required from Python, therefore mpi4py is required; I thought open-mpi was a dependency of mpi4py. I can go through and check if it is not clear.
Any FFMPEG installation should be fine; I doubt you can find a recent build that doesn't work. You are right that the description is too detailed, as it includes default-on compile-time options and will therefore confuse everyone. On some supercomputing environments it is difficult to build a complete ffmpeg, so in that case it was important to know the absolute minimum requirements. These can be removed from the instructions or made clearer if you wish.
Right now I can't install the software so difficult for me to review it...
So yes, if you could check what is necessary (the URL of the package used), then I can give it another go.
Thanks for the suggestions @hugoledoux
GMSimViz now works with Python 3 (3.6 tested with the sample dataset)
The installation has been streamlined (no more manual editing of the text file)
I will quickly add the URLs for the Python packages listed, but note that they require the backends too; these should be installed automatically if using a package manager.
:wave: Hi @leouieda, just checking in. Would you have time in the next week or so to get started on this review? Thanks!
Hi @lheagy and @viktor76525, sorry for the long delay. I'll post my comments here and also open issues in the repository.
@whedon commands
Here are some things you can ask me to do:
# List Whedon's capabilities
@whedon commands
# List of editor GitHub usernames
@whedon list editors
# List of reviewers together with programming language preferences and domain expertise
@whedon list reviewers
🚧 🚧 🚧 Experimental Whedon features 🚧 🚧 🚧
# Compile the paper
@whedon generate pdf
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@lheagy and @viktor76525, I've gone through the review checklist and some things are missing or need improvement. This is a summary of my comments. See the related issues for more details.
In short, the biggest issues that need to be addressed are the lack of documentation and tests. The documentation can be added easily by the developers, but testing will likely require refactoring of the code.
The resulting animations are very impressive! I had no idea that this could be done with GMT and it's very exciting to see someone pushing the boundaries of what the software can do. I think that with a bit of effort this will be a useful package for the geoscience community. The authors will not regret putting in the time to refactor the code for testing and packaging and creating top-quality documentation. This is the kind of thing that takes a while to learn how to do but it pays off in the end because you have more confidence in your own code and avoid a lot of user emails asking simple questions :smile:
👋 Hi @viktor76525, just checking in on your progress with the comments from @leouieda's review. Please keep us updated!
Minor changes are fixed, and the samples now include quicker options to run (2 to 12 minutes instead of 45 minutes). The output for these and the original are available to compare.
Will expand the documentation a bit.
Can break long functions into smaller ones to an extent.
Do not intend to create unit tests, will go with the option to give sample/test data instead.
Thanks for your update @viktor76525: please keep us posted on your progress and check off the items as you address them
👋 Hi @viktor76525, just checking in. How are things going?
Hi @lheagy, just have to finish a bit of refactoring for issue #10. About 80% is done now.
@lheagy and @leouieda I have closed all the issues, ready for checking over.
Thanks @viktor76525, @leouieda and @hugoledoux would you mind taking another look when you have a chance?
:wave: Hi @leouieda and @hugoledoux, just checking in: when do you think you will have time to take another look at this submission? Thanks!
👋 Hi @leouieda and @hugoledoux, thanks for your first round of reviews. Would you have time to take a look through @viktor76525's changes? Thanks!
Hi @viktor76525, just a quick update for you: I have been in touch with @leouieda and @hugoledoux via email. They will jump back in as soon as they can. This is a busy time of year for everyone!
ok @viktor76525 I tried with a clean install under macOS and Python 3.6.5.
sample_data/fault.srf
I get a very simple error:

(gsim2) wlan-145-94-215-233:bin hugo$ ./gmsimviz ../sample_data/fault.srf
Traceback (most recent call last):
File "./gmsimviz", line 1898, in <module>
meta, poi_srf = load_meta(args)
File "./gmsimviz", line 1216, in load_meta
planes = srf.read_header(args.srf_file, idx=True)
File "/Users/hugo/.pyenv/versions/3.6.5/envs/gsim2/lib/python3.6/site-packages/GMSimViz-1.2-py3.6.egg/gmsimviz/srf.py", line 83, in read_header
for _ in xrange(nseg):
NameError: name 'xrange' is not defined
ok, I realised that there's no xrange() function in Python 3... I guess the port to Python 3 has not been made everywhere?
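(For reference: Python 3 removed `xrange()`; `range()` is now lazy and fills the same role. A minimal sketch of a 2/3-compatible shim — illustrative only, not GMSimViz's actual fix:)

```python
# Python 3's range() is lazy, so it can stand in for Python 2's xrange().
# Define xrange only when it is missing (i.e. on Python 3).
try:
    xrange
except NameError:  # Python 3
    xrange = range

nseg = 3
planes = [plane for plane in xrange(nseg)]
print(planes)  # [0, 1, 2]
```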
Thanks for pointing that out @hugoledoux. Last time, Python 3 was meant to have been tested; Python 2 must have managed to sneak back in, because there is no way this would have worked.
All should be good with Python 3 now, I have tested the first and last samples with Python 3.6.5.
sorry for the delay again; the Python 3 error is gone, but I get no visualisation and have no idea if that is "normal":
$ ./gmsimviz ../sample_data/fault.srf
grdimage: Error: (x_max-x_min) must equal (NX + eps) * x_inc), where NX is an integer and |eps| <= 0.0001.
grdimage (gmtlib_read_grd_info): Use grdedit -A on your grid file to make region and increments compatible [/Users/hugo/temp/GMSimViz/resources/Topo/srtm_all_filt_nz_i5.grd]
grdimage: Error: (x_max-x_min) must equal (NX + eps) * x_inc), where NX is an integer and |eps| <= 0.0001.
grdimage (gmtlib_read_grd_info): Use grdedit -A on your grid file to make region and increments compatible [/Users/hugo/temp/GMSimViz/resources/Topo/srtm_all_filt_nz.grd]
Hi @hugoledoux, that is normal as of GMT ~5.4.4, and I think it is not classed as an error in 5.4.3.
That particular command (the first example) only produces a static image (PNG). You should have a file fault_perspective.png in the directory where you run the command from. If it has not created the image, were there any other messages?
oh okay, it did work then. Perhaps writing this in the doc for newbies like me is a good idea though? I saw the ERROR and thought things weren't working...
I'll finish the review soon then.
Some things that need to be fixed/modified:
maybe a bit more clarity about what the output is, and a note that these error messages are expected
I've made all the changes.
ok, great.
@lheagy as far as I am concerned, all is good, except for the automatic tests; but is that mandatory? Also, I can't really test whether the software produces good output: I am no geologist and I have no idea what it produces... But installation and documentation are fine. Hope that helps!
Thanks @hugoledoux!
@viktor76525: testing is a mandatory component for acceptance. Ideally, these are automated tests, but it is also acceptable to have documentation outlining steps that the user can take to test and verify that the software is functioning as expected. @leouieda provided some suggestions in ucgmsim/GMSimViz#10 for setting up automated tests. Many of the functions in geo.py are well-suited for setting up some simple tests as suggested.
Even if you do not unit-test the software as suggested in that issue, you could still implement tests that test the output of larger workflows.
@viktor76525: In order to give us an idea of how you plan to proceed, would you mind opening a new issue in your repo or re-opening ucgmsim/GMSimViz#10 and sketching out how you plan to proceed with testing of the software?
Hi @lheagy.
I was hoping to go for the "OK" section at https://joss.readthedocs.io/en/latest/review_criteria.html
Good: An automated test suite hooked up to an external service such as Travis-CI or similar
OK: Documented manual steps that can be followed to check the expected functionality of the software (e.g. a sample input file to assert behaviour)
Bad (not acceptable): No way for you the reviewer to check whether the software works
I believe the GMSimViz page does contain 3 such manual steps (sample commands) under the Sample Data section along with expected outputs.
Okay, I will bring this to the editorial team for input.
Hi @viktor76525, I brought this up with the editorial board and the consensus is that at a minimum, we require that there be an objective way to test the code. Running an example and eye-ball comparing the output to a video is a subjective test.
The reason that we require at least some objective testing is so that if you or a contributor makes a change to the code, how do you know it didn't break? Since you refactored the code into smaller functions, some of the core functionality should be much easier to design tests for. You do not need to test the entire code-base to meet JOSS criteria, but at a minimum, there should be a mechanism for someone to assess if core functionality is working as expected.
Please let me know if I can provide further clarification.
@viktor76525 I've been thinking about ways that you can check that the end result of the software remains the same when you change the code (a regression test). This is not as good as testing the individual functionality of all pieces of the code, but it can at least give you the peace of mind of knowing that your changes didn't break the existing functionality.
One way you could do this is to save the images/videos produced by a known-good run, and then use the `compare` command from ImageMagick to calculate the difference between the files you stored and new output from the software. This wouldn't require any testing framework or much extra coding. A shell script that runs the program and compares all images should be relatively straightforward. Let me know if you want help implementing any of this.
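(As a sketch of the idea, with hypothetical helper names rather than GMSimViz's actual script: the stored-reference comparison can be as simple as hashing files, with ImageMagick's `compare` as the more robust, perceptual option.)

```python
# Minimal regression check: flag outputs that differ from stored references.
# Byte-for-byte equality via hashing is the crudest check; a perceptual diff
# (ImageMagick `compare`, pytest-mpl) tolerates harmless metadata changes.
import hashlib
from pathlib import Path

def file_digest(path):
    """SHA-256 of a file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def changed_outputs(reference_dir, output_dir):
    """Names of reference images whose new counterparts are missing or differ."""
    changed = []
    for ref in sorted(Path(reference_dir).glob("*.png")):
        new = Path(output_dir) / ref.name
        if not new.exists() or file_digest(new) != file_digest(ref):
            changed.append(ref.name)
    return changed
```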
@lheagy would this be enough to satisfy the JOSS requirements?
Thanks @leouieda, I'll have a look into it. So far I've made pytest tests for the smaller library functions.
Great suggestion @leouieda! Yes, this would satisfy the JOSS requirements.
> So far I've made pytest tests for the smaller library functions.
That's great! Don't hesitate to ask if you need any feedback or help with that too. The regression test can also be useful even if you have unit tests as well.
There are unit tests that can be run by:
python -m pytest
and an image comparison test:
tests/gmsimviz_compare.py
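(The kind of small-function unit test suggested for geo.py might look like this sketch; the bearing function and test names here are illustrative stand-ins, not GMSimViz's actual API:)

```python
# Illustrative pytest-style unit tests for a small geographic helper.
# `bearing_deg` is a hypothetical stand-in for the kind of function in geo.py.
import math

def bearing_deg(lon1, lat1, lon2, lat2):
    """Initial great-circle bearing from point 1 to point 2, in degrees [0, 360)."""
    lon1, lat1, lon2, lat2 = map(math.radians, (lon1, lat1, lon2, lat2))
    y = math.sin(lon2 - lon1) * math.cos(lat2)
    x = (math.cos(lat1) * math.sin(lat2)
         - math.sin(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    return math.degrees(math.atan2(y, x)) % 360

def test_bearing_due_north():
    assert math.isclose(bearing_deg(0, 0, 0, 1), 0.0, abs_tol=1e-9)

def test_bearing_due_east_at_equator():
    assert math.isclose(bearing_deg(0, 0, 1, 0), 90.0, abs_tol=1e-9)
```

Running `python -m pytest` would collect both `test_*` functions automatically.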
Excellent @viktor76525! @leouieda, would you mind taking a look at the tests that have been added?
:wave: Hi @leouieda and @hugoledoux, would either of you have time to take a look at the testing additions this week? If not, please let me know and I can go through. Thanks!
@lheagy I'm still on the road so it's a bit tough to try to run the tests. Sorry for the delay.
Hi @leouieda, no worries, thanks for getting back to me. I can go through this later this week
Hi @viktor76525, the tests are looking good. I just ran into the error noted here: ucgmsim/GMSimViz#19. Would you mind taking a look?
Should be good now @lheagy. I have put details in the issue page.
Thanks @viktor76525. @leouieda: would you be willing to do a final pass?
:wave: Hi @leouieda, @hugoledoux: would either of you be willing to do a final pass here?
@lheagy @viktor76525 sorry for the really long delay. I'm getting caught up now. I went through the changes and am trying to run the tests and examples. I ran into a few issues (ucgmsim/GMSimViz#20 ucgmsim/GMSimViz#21 ucgmsim/GMSimViz#22 ucgmsim/GMSimViz#23) with the tests and missing dependencies. I'm still trying to run the examples and the image comparison but it's taking quite a while. I'll report back when/if the programs finish running.
Overall, I like the changes made to the code itself. It's much easier to find where things are and understand the flow. Nicely done!
@leouieda Thanks for the pytest-mpl suggestion. I have closed all issues and it is ready for a (hopefully) quick look.
@viktor76525 I went through most of the changes and I can get the tests to run now with pytest-mpl. What about the gmsimviz_compare.py file? Is that not part of the test suite?
I'm trying to run the examples but I'm getting some errors (see ucgmsim/GMSimViz#24).
@lheagy @viktor76525 I've gone through the submission one more time and managed to install, get all tests passing, and run the examples. The code has improved a lot since submission and is following a much more standard Python project layout (besides some minor issues with making it pip-installable which we're discussing in ucgmsim/GMSimViz#24).
I don't have any more comments to add and to me the submission ticks all the required boxes.
Many thanks @leouieda and @hugoledoux for the effort you invested in conducting the review!
And congrats @viktor76525! To proceed with the publication process, can you please make a new release and create an archive on zenodo or similar? Please make sure the title and author list match that on your paper, and then post the doi here. Thanks!
Thank you @leouieda @hugoledoux and @lheagy!
Uploaded at: https://zenodo.org/record/2590852
doi: 10.5281/zenodo.2590852
@whedon set 10.5281/zenodo.2590852 as archive
OK. 10.5281/zenodo.2590852 is the archive.
@whedon set 1.3 as version
OK. 1.3 is the version.
:wave: Hi @openjournals/joss-eics, this submission is ready to be accepted :tada:
@whedon check references
Attempting to check references...
```
Reference check summary:
OK DOIs
MISSING DOIs
INVALID DOIs
```
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@whedon accept
Attempting dry run of processing paper acceptance...
PDF failed to compile for issue #808 with the following error:
Can't find any papers to compile :-(
```
Reference check summary:
OK DOIs
MISSING DOIs
INVALID DOIs
```
@whedon accept
Attempting dry run of processing paper acceptance...
```
Reference check summary:
OK DOIs
MISSING DOIs
INVALID DOIs
```
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/558
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/558, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
The reference is missing the Year field. Please add.
https://doi.org/10.1002/2013EO450001
@labarba year added in repo.
@whedon accept
Attempting dry run of processing paper acceptance...
PDF failed to compile for issue #808 with the following error:
```
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100    12    0    12    0     0     80      0 --:--:-- --:--:-- --:--:--    81
sh: 0: getcwd() failed: No such file or directory
sh: 0: getcwd() failed: No such file or directory
pandoc: 10.21105.joss.00808.pdf: openBinaryFile: does not exist (No such file or directory)
```
Looks like we failed to compile the PDF
```
Reference check summary:
OK DOIs
MISSING DOIs
INVALID DOIs
```
@whedon accept
Attempting dry run of processing paper acceptance...
```
Reference check summary:
OK DOIs
MISSING DOIs
INVALID DOIs
```
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/559
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/559, then you can now move forward with accepting the submission by compiling again with the flag deposit=true
e.g.
@whedon accept deposit=true
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨
Here's what you must now do:
Party like you just published a paper! 🎉🌈🦄💃👻
Any issues? Notify your editorial technical team...
Congratulations, @viktor76525, your JOSS paper is published!
Sincere thanks to the editor: @lheagy, and the reviewers: @leouieda, @hugoledoux 🙏 We couldn't do this without you!
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.00808/status.svg)](https://doi.org/10.21105/joss.00808)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.00808">
<img src="http://joss.theoj.org/papers/10.21105/joss.00808/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.00808/status.svg
:target: https://doi.org/10.21105/joss.00808
This is how it will look in your documentation:
We need your help!
Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following: