Submitting author: @imr-framework (Marina Manso Jimeno)
Repository: https://github.com/imr-framework/OCTOPUS
Version: 0.1.4
Editor: @jni
Reviewers: @puolival, @emilljungberg
Archive: Pending
:warning: JOSS reduced service mode :warning:
Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.
Status badge code:
HTML: <a href="https://joss.theoj.org/papers/4627d3e6f4f5ba10d9203c7574145e77"><img src="https://joss.theoj.org/papers/4627d3e6f4f5ba10d9203c7574145e77/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/4627d3e6f4f5ba10d9203c7574145e77/status.svg)](https://joss.theoj.org/papers/4627d3e6f4f5ba10d9203c7574145e77)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
@puolival, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @jni know.
✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨
Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @puolival it looks like you're currently assigned to review this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews.
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:
@whedon generate pdf
Reference check summary:
OK DOIs
- 10.2217/iim.10.33 is OK
- 10.1109/42.108599 is OK
- 10.1002/1522-2594(200102)45:2<269::AID-MRM1036>3.0.CO;2-5 is OK
- 10.1002/mrm.21599 is OK
- 10.1109/42.781014 is OK
- 10.1109/42.3926 is OK
- 10.1002/mrm.1910250210 is OK
- 10.1002/mrm.1910370523 is OK
- 10.1002/mrm.22428 is OK
- 10.1016/j.mri.2017.07.004 is OK
- 10.1016/j.neuroimage.2011.09.015 is OK
- 10.1109/TMI.2002.808360 is OK
- 10.1109/TSP.2005.853152 is OK
- 10.1016/j.mri.2018.03.008 is OK
MISSING DOIs
- None
INVALID DOIs
- None
@imr-framework @puolival fyi I'm having a bit of trouble finding a second reviewer, so I've decided to start the review early and add the second reviewer manually when I find them.
@puolival thank you again for agreeing to review, please see above for review instructions and a checklist, and please ping me if you have any queries at all!
@jni The URL for accepting the invitation (2nd item on the reviewer instruction list) doesn't work for some reason. I get the error message "Sorry, we couldn't find that repository invitation. It is possible that the invitation was revoked or that you are not logged into the invited account". (I am logged in, so it must be some other reason.)
Hmm, @arfon @lpantano any ideas?
(@puolival sorry about that! :grimacing: :construction: :grimacing: hold please!)
@whedon re-invite @puolival as reviewer
OK, the reviewer has been re-invited.
@puolival please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations
@puolival @jni - not sure what went wrong there. @puolival - could you try clicking the link above again now?
@arfon @jni It worked now and I got a message saying that I was granted push access. Thanks!
@puolival this is just a kind and gentle ping, since I don't see any tick marks above.
@jni I am almost finished with the review (1–2 more days), so I thought I would send all comments at once.
@whedon add @emilljungberg as reviewer
OK, @emilljungberg is now a reviewer
@emilljungberg thank you so much for agreeing to review the paper! At the top of this issue there is a reviewer checklist that you should make your way down. See also the section titled "Reviewer instructions & questions" and links therein. Any questions, just ping me on this issue!
I'll send my comments regarding this submission in two parts. In this first part, I have offered comments regarding the manuscript. In the second part, I will refer to the submission checklist and the GitHub repository.
Comments regarding the software paper
The manuscript needs to be improved before publication. I have offered below a number of comments, which are all intended as constructive criticism:
The abstract should be much expanded. Presently, it doesn't communicate very well what the paper is about. What are the three implemented methods and why were they selected? Who is the target audience of this software (does every MR scientist need it)? What is the present state of the art and the broader context of this work? What kind of results can be obtained using the software? Please modify the abstract so that it is easier for the reader to understand whether the work is relevant for them.
The software uses the NumPy, SciPy, NiBabel, Matplotlib, OpenCV, Pydicom, and PyNUFFT libraries but these have not been cited in the manuscript. Also, the source of the Shepp-Logan phantom image is not stated. The relevant citations should be added.
The reference to Nylund (2014) isn't complete. Please clarify whether KTH refers to KTH Royal Institute of Technology in Sweden.
In the acknowledgements section, it is stated that funding was received by Fessler but there is no Fessler on the list of authors.
The manuscript should be spell-checked. In the present version, there are typos at least in the summary and statement of need, e.g. "Pyton-based", "Resoance", "propreties", and "callibrated".
What are Google Colab notebooks? This is not explained in the manuscript.
Please explain why the simulation is relevant, in a manner that allows non-experts to understand why using a Shepp-Logan head phantom is reasonable. For example, is it simple while still capturing the most relevant details? This and similar questions currently remain unanswered for a broad audience.
Why would researchers want to run the software in a browser? This is stated to be a major benefit. However, to me it is not readily apparent why this is a positive thing. Would it not be better to incorporate the algorithms into existing pipelines, which would allow automation? Please discuss this further.
It is stated that "Furthermore, most of the available packages are also MATLAB-based, restricting the portability, accessibility and customization of the code (Ravi et al., 2018)". However, MATLAB code can be customized just as well as Python code, and it can also be made open-source. For these reasons, it is difficult to agree with this statement, unless the authors expand the discussion to pinpoint that the previously developed packages are all closed-source. Otherwise I think such a general-level statement isn't justified in this context.
In the manuscript, we are given a brief demonstration in which a ground truth is visually compared to three different corrected versions. However, quantitative data should be provided alongside the visual comparison so that it becomes possible to assess whether the software is indeed able to perform the corrections appropriately. This could be reported, for example, as the increase in signal-to-noise ratio. Related to this, the reader is also left wondering how much changing the signal-to-noise ratio would change the results; therefore, I suggest performing the simulations for a range of noise levels. Further, at a broader level, how useful is the software generally to someone analyzing MR images? What range of scenarios does it cover? Please briefly discuss the limitations of the presented software.
Please describe the functionality of the software in more detail. What is possible and what is not?
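To make the suggested quantitative comparison concrete, here is a hedged sketch that sweeps noise levels on a Shepp-Logan phantom and reports PSNR/SSIM. It uses only scikit-image and NumPy; nothing here comes from OCTOPUS, and the noise levels are illustrative.

```python
# Hedged sketch of the quantitative comparison suggested above.
# Uses only scikit-image and NumPy -- nothing here comes from OCTOPUS.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ground_truth = shepp_logan_phantom()  # float image with values in [0, 1]

for sigma in (0.01, 0.05, 0.1):
    noisy = ground_truth + rng.normal(0.0, sigma, ground_truth.shape)
    # Passing data_range explicitly keeps the metric window independent of
    # whichever image happens to have the larger value range.
    psnr = peak_signal_noise_ratio(ground_truth, noisy, data_range=1.0)
    ssim = structural_similarity(ground_truth, noisy, data_range=1.0)
    print(f"sigma={sigma:.2f}  PSNR={psnr:.1f} dB  SSIM={ssim:.3f}")
```

In a real comparison the `noisy` image would be replaced by each method's corrected output, and the table of PSNR/SSIM values per noise level would go alongside the visual figure.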
Edit: minor fix in one sentence and title.
This is the second part of the review, in which I refer to the checklist above. I have now ticked those items of the checklist which I think have been addressed satisfactorily (edit).
Contribution and authorship
The submitting author has contributed to the repository. However, @imr-framework doesn't appear to belong to any particular person but to a working group. While I think there is no reason not to allow this, I wasn't able to find a JOSS policy regarding the matter. Therefore, I haven't yet added a tick mark to this item, to notify the editor to consider how this should be handled.
Substantial scholarly effort
According to the JOSS guidelines, "Reviewers should verify that the software represents substantial scholarly effort. As a rule of thumb, JOSS' minimum allowable contribution should represent not less than three months of work for an individual."
Based on the commits made to the repository, for which data is available on the page https://github.com/imr-framework/OCTOPUS/graphs/contributors, this requirement is only met for @Mmj94. The account @imr-framework has 5 commits totaling a net of 738 added lines; however, the majority of these are from the GNU license file. @sairamgeethanath has added and removed 7 lines.
Documentation
Some unit tests have been provided, but the test source code is mostly not commented. Therefore, I am unfortunately not able to confirm that the tests are sufficient to verify the functionality of the software. I suggest the authors describe what each test does, and what the tests within each file cover overall. Based on the number of tests and the available information, it seems that the amount of testing might not yet reach what is expected in the JOSS guidelines.
Software paper
I have posted the comments regarding the manuscript as a separate message; see above.
@puolival
this requirement is only met for @Mmj94
This is a collective requirement for the paper as a whole, not meant to be interpreted as needing 3 months of work per author. So I think this means you agree that the requirement is met?
@jni Sorry, I thought it was per person. I do agree that the requirement is met collectively. I have now updated the checklist.
Edit: If possible, I suggest that this instruction in the JOSS guidelines be reworded to be clearer. I was also uncertain whether "software" refers to software + software paper, or only the former.
@emilljungberg I see your checklist filling up, and now those issues on the source repo. Awesome, thanks for that!
@puolival thank you also for the review above.
@imr-framework @Mmj94 I think at this point we are waiting for some fixes/improvements/responses on your end. Please let us know if you need additional guidance or clarification!
@jni @emilljungberg @puolival Thank you so much for your comments and reviews.
Sorry for the delay in my response, I just came back from vacation but I'll definitely start working on all the issues asap.
The OCTOPUS software package consists of Python implementations of three different off-resonance correction methods for MRI reconstruction. The package was designed with the intention of making these methods easier to use, with the motivation that currently available software packages, typically in MATLAB, are not flexible enough to accommodate different acquisition strategies. I think the intention behind this work is really good, but both the manuscript and the code need improvement before it can fill the gap that the authors have identified.
The Python code is well structured into sub-packages and modules with very clear documentation for each function. Many functions have been developed to simplify the interface for the end user, such as `im2ksp` and `ksp2im`, as well as the generalized data input function `get_data_from_file`. However, in doing so, I think the authors end up in the same trap that they tried to avoid, namely not making their tool flexible to all types of data and acquisition strategies. These concerns are highlighted more specifically later in the review.
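For readers unfamiliar with such wrappers, here is a minimal Cartesian sketch of what functions like `im2ksp` and `ksp2im` presumably do; the actual OCTOPUS signatures and FFT conventions may differ.

```python
# Hypothetical minimal Cartesian versions of what wrappers such as
# im2ksp / ksp2im presumably do; the actual OCTOPUS signatures may differ.
import numpy as np

def im2ksp(im):
    """Image -> k-space via a centred 2D FFT."""
    return np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(im)))

def ksp2im(ksp):
    """k-space -> image via the inverse centred 2D FFT."""
    return np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(ksp)))

im = np.random.default_rng(1).random((64, 64))
roundtrip = ksp2im(im2ksp(im))
print(np.allclose(roundtrip.real, im))  # True: the pair invert each other
```

The flexibility concern is visible even in this sketch: a fixed centred-FFT convention only covers Cartesian sampling, so non-Cartesian trajectories need a different path entirely.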
The demos provided in the OCTOPUS repository clearly demonstrate that the tool is able to correct off-resonance artefacts with the three methods provided. However, it is not clear to me whether OCTOPUS is intended to be an end-to-end reconstruction package which includes off-resonance correction, or a tool for use together with other reconstruction toolboxes. As a complete reconstruction package, OCTOPUS only implements the off-resonance correction and wrappers for 2D FFT and NUFFT, and lacks the advanced reconstruction methods used today such as parallel imaging and compressed sensing.
In the sections below I provide comments on the manuscript, the code, and some concluding recommendations.
OCTOPUS is referred to as a zero-footprint software since it can be run in a web browser. I'm not familiar with the term zero-footprint software, and again it is not clear how a web-browser implementation helps in an MR reconstruction workflow, which requires reading and writing of raw data with tools that are typically proprietary and need to be run on a local machine.

Suggested rewording: "Off-resonance is an MR artifact which can be produced by, for instance, field inhomogeneities, differences in tissue susceptibilities, and chemical shift. These phenomena can cause the phase of off-resonant (not at the resonant or Larmor frequency) spins to accumulate along the read-out direction."

Users of this toolbox will be familiar with the term __off-resonant spins__.
__Summary:__ Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
It would be good to highlight the three specific methods that this package includes and what the limitations are, such as 2D/3D images and Cartesian/non-Cartesian trajectories.
__A statement of need:__ Do the authors clearly state what problems the software is designed to solve and who the target audience is?
It is clear that the software corrects off-resonance artefacts, but it could be clarified who the software is for and what the typical use of the software would be. Is it intended mainly to operate on reconstructed images or on raw k-space data?
__State of the field:__ Do the authors describe how this software compares to other commonly-used packages?
I think this needs to be clarified in the manuscript. It is stated that available packages are limited and highly specific, but without specifying in which way they are specific and how OCTOPUS fills this gap. Considering that off-resonance correction is intended to remove an image artefact, and that it is a computationally heavy problem to solve, a comparison to existing packages in terms of image quality and processing time would be useful for evaluating the package.
__Quality of writing:__ Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
The language and writing style in the manuscript are generally of good quality. As mentioned by @puolival, there are a few typos that need to be checked.
__References:__ Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
It would be good to extend the reference list with URLs to the available code for the other methods listed in the statement of need section.
- Consider using `skimage.data` to reduce the data size in the repository. It would also enable more flexible demos.
- `fmap_recon` appears to only be available for Cartesian data.
- The `SSIM_calc` and `pSNR_calc` functions use the data range of the second image to set the window range for the calculation of SSIM and pSNR. If the two input images have very different data ranges, this could skew the results. To avoid this, it could be better to use the scikit-image function directly instead, especially given that `SSIM_calc` and `pSNR_calc` only wrap this function and simplify one parameter call. This eliminates any potential errors.
- […] `ORC` methods in the `ORC` module (i.e. for CPR, fs-CPR, and MFI).

__Substantial scholarly effort:__ Does this submission meet the scope eligibility described in the JOSS guidelines?
Based on an assessment of the code for this package, I do not think the current code base satisfies the JOSS scope eligibility criteria. The core of this package is the `ORC` module, which implements the three off-resonance correction methods in less than 500 lines of code, much of which is docstrings and repeated lines of code for data input. Much of the repository consists of code to demonstrate the use of the toolbox, or wrappers that simplify function calls such as FFT or image metric calculations from other packages. The data input/output methods that would be required to make this a powerful package are either too simple to be useful, or too specific to the authors' use case to be generalisable.
However, I do think there is potential for this package to be really useful, but it would, in my view, require either extending the package or pivoting in one of a few different ways, which I have outlined in the last section (Suggestions for moving forward).
__Performance:__ If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)
There are no comments on the performance of the code, which would have been good to include. A comparison between the different methods would be really useful for users, given that the CPR method is considerably slower than fs-CPR and MFI (~20 min for a single slice and coil, compared to about 20 s for the two other methods).
Again, regarding the use of this tool on image or k-space data: would it work with parallel-imaging undersampled data, where a direct FFT/NUFFT of the data would result in folding artefacts? I think this needs to be clarified, as almost all acquisitions use some type of image acceleration.
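A comparison like the one suggested above could be gathered with a small timing harness; the benchmarked function below is a plain FFT stand-in, not an OCTOPUS correction method.

```python
# Minimal timing harness for the suggested method comparison; np.fft.fft2
# is only a stand-in workload here, not an OCTOPUS correction method.
import time
import numpy as np

def benchmark(fn, *args, repeats=3):
    """Return the best wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

data = np.random.default_rng(2).random((256, 256))
print(f"fft2 best of 3: {benchmark(np.fft.fft2, data):.4f} s")
```

Running each correction method through such a harness on the same slice would give exactly the CPR vs fs-CPR vs MFI table a prospective user needs.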
__Functionality documentation:__ Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
There is a very nice Read the Docs API documentation provided for the repository. It would however be good to be more specific regarding the different data structures used, for instance: […]
__Automated tests:__ Are there automated tests or manual steps described so that the functionality of the software can be verified?
The authors have developed unit tests for the core functionality, but these could be expanded. In particular, it would be good to test the ORC for input data size (1D/2D/3D/4D) and for input data type (magnitude/complex).
To make the best use of, and gain the widest traction for, the effort put into this software package, I think it would be good to distinguish between correction of image data and correction of k-space data, as the latter requires additional advanced reconstruction methods.
For an image correction method, the end user would need a robust data input and output method, typically reading either DICOM or NIfTI, together with a field-map input or data from which a field map can be reconstructed. For this to be really useful, it would be good to have a Python script which can take the input data as command-line arguments, choose the correction method, and return a corrected image. That would make this a useful research tool that people could use and modify to their preferences.
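Such a command-line front end might be sketched as follows; every file name, flag, and default here is an illustrative assumption, not part of OCTOPUS.

```python
# Hypothetical command-line front end of the kind suggested above; every
# file name, flag, and default here is an illustrative assumption.
import argparse

def build_parser():
    p = argparse.ArgumentParser(description="Off-resonance correction (sketch)")
    p.add_argument("image", help="input image, e.g. NIfTI or DICOM")
    p.add_argument("fieldmap", help="field map in Hz")
    p.add_argument("--method", choices=("CPR", "fs-CPR", "MFI"), default="MFI")
    p.add_argument("--out", default="corrected.nii.gz", help="output file")
    return p

args = build_parser().parse_args(["scan.nii.gz", "b0.nii.gz", "--method", "fs-CPR"])
print(args.method, args.out)  # prints: fs-CPR corrected.nii.gz
```

The body of the script would then read the two inputs (e.g. with NiBabel or Pydicom), dispatch to the chosen correction method, and write `args.out`.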
For a k-space based correction method, on the other hand, the user would need to know how to integrate this tool with other reconstruction toolboxes to produce their final image. A root-sum-of-squares coil combination is rarely sufficient these days, so there needs to be a clear description of how to integrate this method with other reconstruction toolboxes, such as the `sigpy` framework in Python (which I have no affiliation to). To enable this, there would need to be a bit more documentation of the data structures. Also, for processing of raw data, would the methods work with parallel imaging data, where a direct FFT of the data results in folding artefacts? For a reconstruction framework this is a crucial part, as almost all acquisitions today use some kind of acceleration.
Overall, the functionality of the methods in OCTOPUS is clear from the examples, but the functionality in terms of compatible input data, and how to integrate it into a research framework, is not as clear to me. With additional documentation and some development of the code, I think this can be resolved.
@Mmj94 no pressure from this end, but just want to clarify that we are waiting for input from you at this point, I think! Please let us know if you need any further clarifications, or if you have questions about specific comments.
@jni thanks for the reminder! I have closed some issues in the GitHub repo and will be addressing the other code suggestions/comments with some changes I'll be pushing over the coming week!
Thank you all so much for your patience and again thank you for all your comments and suggestions, I believe they have significantly improved the quality of this tool.
I believe I have addressed all the code comments with my last push to the `dev` branch. Please have a look at it and let me know if something else needs to be done for acceptance of the paper.
Once the checklists are completed regarding the code, I'll open a pull request to merge the branch and update the ReadTheDocs page, readme, and PyPI package.
There are still two open issues in the repository. I'm hoping Issue #3 will be closed once the manuscript is updated. To close Issue #4, I have added some helper functions that check input dimensions while serving as documentation at the same time; please have a look and let me know if further action is needed.
In response to @emilljungberg's specific question regarding parallel imaging data: unfortunately, at this early stage OCTOPUS won't be able to correct it.
I'll start addressing the manuscript issues and hopefully will get back to you soon with an update.
Thanks for the update @Mmj94! Looking forward to the last few items being resolved and getting this in!
Hi all! As per usual, no pressure from me, but just want to make sure things are chugging along and that no one needs help unblocking something!
Hi all, thank you again for your patience during this long process.
I believe we've finally implemented all of your comments in both the manuscript and code. Everything is now updated in the repository.
For the comments related to the manuscript, I've created a summary document that I'm attaching here.
Please let me know if any other action is required from my side.
Thanks again!
@Mmj94 Could you please recompile the manuscript? I'd like to compare the response to reviewers to the updated manuscript. Thanks!
@whedon generate pdf
PDF failed to compile for issue #2578 with the following error:
```
/app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/author.rb:72:in `block in build_affiliation_string': Problem with affiliations for Marina Jimeno, perhaps the affiliations index need quoting? (RuntimeError)
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/author.rb:71:in `each'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/author.rb:71:in `build_affiliation_string'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/author.rb:17:in `initialize'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon.rb:205:in `new'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon.rb:205:in `block in parse_authors'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon.rb:202:in `each'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon.rb:202:in `parse_authors'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon.rb:93:in `initialize'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/processor.rb:38:in `new'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/lib/whedon/processor.rb:38:in `set_paper'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/bin/whedon:58:in `prepare'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/command.rb:27:in `run'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/invocation.rb:126:in `invoke_command'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor.rb:387:in `dispatch'
  from /app/vendor/bundle/ruby/2.4.0/gems/thor-0.20.3/lib/thor/base.rb:466:in `start'
  from /app/vendor/bundle/ruby/2.4.0/bundler/gems/whedon-d14a699185fb/bin/whedon:131:in `<top (required)>'
  from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `load'
  from /app/vendor/bundle/ruby/2.4.0/bin/whedon:23:in `<main>'
```
@whedon generate pdf
:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left: