Joss-reviews: [REVIEW]: Open Source Optical Coherence Tomography Software

Created on 17 Aug 2020 · 59 comments · Source: openjournals/joss-reviews

Submitting author: @spectralcode (Miroslav Zabic)
Repository: https://github.com/spectralcode/OCTproZ
Version: v1.0.0
Editor: @arfon
Reviewers: @jdavidli, @brandondube
Archive: 10.5281/zenodo.4148992

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/539ea5d7842ff0a7607a4a405ea69730"><img src="https://joss.theoj.org/papers/539ea5d7842ff0a7607a4a405ea69730/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/539ea5d7842ff0a7607a4a405ea69730/status.svg)](https://joss.theoj.org/papers/539ea5d7842ff0a7607a4a405ea69730)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@phtomlins & @jdavidli, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @arfon know.

✨ Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest ✨

Review checklist for @brandondube

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@spectralcode) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @jdavidli

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@spectralcode) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Labels: C++, GLSL, QMake, accepted, published, recommend-accept, review

All 59 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @phtomlins, @jdavidli it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: repository watch settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper PDF after making changes to the paper's .md or .bib files, type:

@whedon generate pdf
Reference check summary:

OK DOIs

- 10.1364/boe.3.003067 is OK
- 10.1364/oe.18.011772 is OK
- 10.1117/1.3548153 is OK
- 10.1117/1.JBO.17.10.100502 is OK
- 10.1109/fccm.2011.27 is OK
- 10.1117/1.JBO.18.2.026002 is OK
- 10.1364/OE.18.024395 is OK
- 10.1364/BOE.5.002963 is OK

MISSING DOIs

- None

INVALID DOIs

- None

@phtomlins, @jdavidli - please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Any questions/concerns please let me know.

@whedon add @brandondube as reviewer

OK, @brandondube is now a reviewer

:wave: @spectralcode good news at the start - the 1.2 release opens on my desktop with no issue. I did not easily find an example file in the repository or referenced in the documentation. Is there one available to demo the software with?

Hi @brandondube thank you very much for reviewing!
That's great that the software opens without issues. To get a link to a test data set, please have a look at the "Download and Installation" section in the readme.md. The same link can also be found in the quick start guide of the user manual.

Thanks for pointing out the demo data, I missed it the first time. Here are my comments:

I want to discuss performance at the outset, since it is a major focus of this work. My computer is as follows:

  • i7-9700k
  • 32GB of RAM
  • RTX 2080
  • W10 pro

If I disable all processing and display, then the performance yield is:

  • 4 volumes per second
  • 64 buffers per second
  • 1020 B-scans per second
  • 5.2e7 A-scans per second
  • 26MB buffer
  • 1.7GB/s data throughput

A 2080 is about equivalent to a 1080 Ti, if I remember correctly, so this seems in line with your lab system, modulo the OS, which can make a large difference. All my performance numbers are in line except the number of volumes per second. Why? I have no graphical displays up and no processing enabled.

W.r.t. the paper and the code:

Paper

Last line of section 5 should be "being processed."

In Python JOSS submissions, it is customary to provide citations for NumPy, etc. You do not have one for Qt or CUDA. If there are equivalents, please cite them; if there are no academic citations, perhaps cite a product page or similar.

Do you have permission to use the EU and regional development fund logos?

I did not see a comparison to other competing technologies or justification for C++/Qt as the technologies chosen. This is not required, but is usually nice to see. For example, implementation with python and CuPy probably would have a much lower barrier to entry for scientists.

Software / Docs

✔️ sample extensions provided

The manual feels a bit light. Your paper has a major focus on user extensibility and open source contributions from others. I feel that not having a developer guide or similar is a strong impediment to success in that area.

It is unclear to me when taking a glance at the code how the internal dependencies work. E.g., processing.h/cpp can see much of the application and (I think) nearly all of the data. I think this goes along with a need for a developer guide.

The example dataset could be more boldly pointed out (I missed it the first time).

I also did not find a test suite.

The remaining check boxes on my review list are related to these points - I will wait for your feedback.

Cheers

@brandondube thank you very much for your feedback!
Performance:
Thank you for testing the performance and providing this information!
Have you used the same data dimensions and the same batch size as stated in the paper? (1024 samples per raw A-scan, 512 A-scans per B-scan, 256 B-scans per volume, and a batch size of 256 B-scans per buffer)

Paper:

Last line of section 5 should be "being processed."

→ Fixed!

You do not have one for Qt or Cuda. [...] please cite them.

→ Done! I couldn't find any other JOSS paper citing Qt or CUDA, so I am not sure if the way I cited them is the common way to do it. Please let me know if you would have done it differently.
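For illustration, one common pattern is a @misc entry in paper.bib pointing at the project page. The entries below are a sketch; the fields are illustrative and not copied from the actual paper:

    @misc{qt,
      author = {{The Qt Company}},
      title  = {Qt},
      year   = {2020},
      url    = {https://www.qt.io}
    }

    @misc{cuda,
      author = {{NVIDIA Corporation}},
      title  = {CUDA Toolkit},
      year   = {2020},
      url    = {https://developer.nvidia.com/cuda-toolkit}
    }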

Do you have permission to use the EU and regional development fund logos?

→ Thank you very much for pointing this out. I am in fact obliged to use the logo. In addition, the logo must be at least as large as the largest other logo, so I may have to enlarge it, as the JOSS logo is quite big. I will talk with the administration regarding the logo size; maybe there is even a way to not use the logos at all, as including them seems quite unusual to me.

For example, implementation with python and CuPy probably would have a much lower barrier to entry for scientists.

→ Yes, I think you are right about that! There is a long-term idea to extend the plugin system so that it enables scripting with Python, but no evaluation has been done yet in this regard. At the start of the project it did not cross my mind to use Python for a full-blown desktop application that has specific requirements on memory management and processing speed and needs to control all kinds of hardware. I knew all of these requirements could be met with C++ and CUDA, so I went for it.

Software / Docs:

I feel that not having a developer guide or similar is a strong impediment to success in that area.

→ This is a great comment. Thank you for pointing this out! I feel the same way, so here is the developer guide.

The example dataset could be more boldly pointed out (I missed it the first time).

→ Done! It now has its own section.

I also did not find a test suite.

→ There is none. I hoped that providing test data and a detailed step-by-step guide on how to use it would be sufficient. See the JOSS review criteria for Tests.

@whedon generate pdf

Just a small update: I received feedback regarding the logo and it should stay the way it is.

I do not remember the precise settings I used for the sample data configuration 😅 - it has been too many days. I used the settings listed in the documentation where it is referenced. If those differ from the settings in the paper, then the performance may differ.

No sweat on the logos - if you are required to use them by your funding agencies, that is quite a strong "permission to use." We would not want to be in a situation where JOSS receives a C&D or similar on your paper over a logo; glad it's sorted.

The developer guide looks solid.

Regarding tests - I had in mind something like a GUI automation script that can verify the software correctly performs some action. Your software meets the standard for JOSS, but there is nothing wrong with exceeding the standard :)
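To make that concrete, a minimal sketch of such a GUI automation test with Qt's own test framework could look like the following. The widget and behaviour are stand-ins, not actual OCTproZ classes, and building it would need QT += testlib widgets in the QMake .pro file:

    #include <QtTest>
    #include <QPushButton>

    // Hypothetical smoke test: simulate a user click and verify the
    // resulting state change. Real tests would target actual widgets.
    class SmokeTest : public QObject
    {
        Q_OBJECT
    private slots:
        void clickTogglesLabel()
        {
            QPushButton button("Start");
            QObject::connect(&button, &QPushButton::clicked,
                             [&button] { button.setText("Stop"); });
            QTest::mouseClick(&button, Qt::LeftButton); // synthesized click
            QCOMPARE(button.text(), QString("Stop"));
        }
    };

    QTEST_MAIN(SmokeTest)
    #include "smoketest.moc"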

LGTM

-Brandon

@whedon add @jrasakanthan as reviewer

OK, @jrasakanthan is now a reviewer

@whedon remove @phtomlins as reviewer

OK, @phtomlins is no longer a reviewer

:wave: @jrasakanthan - thanks for agreeing to take the place of @phtomlins as a reviewer here. Please carry out your review in this issue by updating the checklist above and giving feedback in this issue. The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html

Any questions/concerns please let me know.

@spectralcode
I've been doing testing on a computer with the following specs:
  • i9-9900X
  • RTX 2080 Ti
  • 64 GB RAM
  • Windows 10 Pro

I got similar performance numbers to @brandondube.

Without 3D:

  • Volumes/second: 3.7505
  • Buffers/second: 60
  • B-scans/second: 960
  • A-scans/second: 491520
  • Buffer size: 26 MB
  • Data throughput: 1560 MB/s

With 3D:

  • Volumes/second: 3.7505
  • Buffers/second: 60
  • B-scans/second: 960
  • A-scans/second: 491520
  • Buffer size: 26 MB
  • Data throughput: 1560 MB/s

I did notice that the settings described in the documentation differ from what's described in the paper. The performance numbers above use the settings from the documentation. When I tried the settings in the paper, the application seemed to hang for a few seconds and then completely crashed.

@jdavidli thank you very much for testing! I'm not sure why you are not able to replicate the processing rates from the paper. Here is some information that may help to clear things up:

  • The settings described in the documentation are chosen such that the test data works on almost all GPUs.
  • The settings described in the paper are optimized for the used GPU.
  • The buffer size has a major effect on processing speed. If it is too small, processing may be slower than it could be. If it is too large, the application may crash.
  • A larger buffer size results in higher GPU memory usage, which can exceed the available memory on the GPU in use (this usually causes the application to crash).

In the paper I used 1024 samples per raw A-scan, as this is a more common value than 1664.
With the provided test data set you could also use 1024 samples per raw A-scan to verify the processing rates described in the paper, but the resulting OCT images will of course look distorted. By the way, the provided test data set is exactly the same one I used in the paper.

With an RTX 2080 Ti you should totally be able to use the same parameters as used for the lab system in the paper.
The crash you described puzzles me, and I'm not sure what caused it.
One thing we could test is to increase the buffer size just slightly to see if processing speed increases or if the application still crashes: 1664 samples per raw A-scan, 512 A-scans per B-scan, 8 B-scans per buffer, 32 buffers per volume, 32 buffers to read from file.
Could you retry it with these parameters, please?

@jdavidli sorry, the parameters in my last message are incorrect. They actually reduce the buffer size!
Here the correct parameters: 1664 samples per raw A-scan, 512 A-scans per B-scan, 32 B-scans per buffer, 8 Buffers per volume, 8 Buffers to read from file
With those parameters the buffer size should be 52 MB

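As a sanity check, the 52 MB figure follows directly from those parameters; here is the arithmetic as a small, self-contained C++ snippet (a minimal sketch, assuming 16-bit raw samples, which the 52 MB figure implies but which I have not verified against the OCTproZ source):

    #include <cstdio>

    int main() {
        // Virtual OCT System parameters from the comment above
        const long long samplesPerAscan = 1664;
        const long long ascansPerBscan  = 512;
        const long long bscansPerBuffer = 32;
        const long long bytesPerSample  = 2; // assumption: 16-bit raw samples

        // 1664 * 512 * 32 * 2 = 54,525,952 bytes = 52 MiB
        const long long bufferBytes = samplesPerAscan * ascansPerBscan
                                    * bscansPerBuffer * bytesPerSample;
        std::printf("Buffer size: %lld MiB\n", bufferBytes / (1024 * 1024));

        // Throughput then scales with the buffer rate, e.g.
        // 60 buffers/s * 52 MiB = 3120 MiB/s.
        return 0;
    }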

@spectralcode with your new suggested parameters, I get the following:

  • Volumes/second: 7.5
  • Buffers/second: 60
  • B-scans/second: 1920
  • A-scans/second: 9.8e5
  • Buffer size: 52 MB
  • Data throughput: 3120 MB/s

The volume rate is still significantly lower than what I would expect compared to your benchmarks in the paper. The A-scans/second also seems really low, and I suspect it might be affecting the volume rate. Furthermore, with these settings, it seems like the test data set is no longer playing correctly: the B-scan does not change at all (unless I have it display another location within the volume).

I think the additional information you provided about parameter settings in your other comment is useful and would be good to include in either the paper or the manual. The labels in the system settings also differ slightly from the paper (e.g. "bit depth" vs. "bits per sample", "B-scans per buffer" and "buffers per volume" vs. "frames per volume"). It would probably be a bit more user-friendly if the language were more consistent.

The included developer guide still feels a bit light. The examples you provided of extension plug-ins are pretty good. But looking through the source code of the virtual OCT system example for acquisition plug-ins, I feel like it would still be a bit challenging for others to use this software with their own systems and contribute to the project. If you have an example of an implementation with a physical OCT system rather than a virtual one, I think that would be incredibly useful to include as an example acquisition plug-in.

@jdavidli thank you very much, it is great that you were able to change the parameters and perform processing without a crash. This tells me at least that there is no major bug in the software that I have missed, and that performance on your system can be increased by adjusting the Virtual OCT System parameters.

The documentation now contains detailed information about processing performance. Please let me know if this helps you to replicate my measurements.

The B-scan does not change at all

→ Yes, this is expected when _Buffers per volume_ and _Buffers to read from file_ have the same value. If you want the live view to change while using the Virtual OCT System, double the value for _Buffers to read from file_.

The labels in the system settings also differ slightly from the paper

→ Thank you very much for pointing this out! I changed it!

If you have an example of an implementation with a physical OCT system rather than a virtual one, I think that would be incredibly useful to include as an example acquisition plug-in.

→ Yes, I totally agree with you! Unfortunately I cannot provide an implementation for actual OCT hardware at the moment.

@whedon generate pdf

Hi all! I am the rotating associate editor-in-chief for the week and want to check in here, since it's been a while since there has been activity on this review. It looks like we may have a complete review from @brandondube, no review from @jrasakanthan, and a partial review from @jdavidli. I think we can take a step forward if @brandondube can verify that his review is finished, we can get a review started from @jrasakanthan, and we can see what the next step is for @jdavidli. Thanks all!

Yes, my review is complete. LGTM!

@kthyng I do not have access to WinOS (only Unix) so I cannot review this SW.

@jrasakanthan ah ok, thanks then.

@arfon want to either get a new reviewer or proceed with two instead of three?

I'm happy to proceed with two (sorry it didn't work out this time @jrasakanthan)

@jdavidli - how are you getting on completing your review?

@whedon remove @jrasakanthan as reviewer

OK, @jrasakanthan is no longer a reviewer

@kthyng @arfon There were just a few more minor things I wanted to address, but I should be done pretty soon.

@spectralcode I really like the new page you have on performance. It's super helpful, and I was able to replicate your performance (same hardware as previously mentioned):

  • Volumes/second: 38
  • Buffers/second: 38
  • B-scans/second: 9840
  • A-scans/second: 5.038e6
  • Buffer size: 256 MB
  • Data throughput: 9840 MB/s

I just have a few minor suggestions for the paper itself. In the third line of the introduction, it should say "by combining a reference beam." In section two, towards the end, where you talk about physical or virtual OCT systems, I think it would be useful to add a sentence explaining what you mean by virtual OCT systems. This also relates to my last concern. At the beginning of section three, you say raw data from the OCT system is transferred to RAM. I might've missed this, but how is that simulated in a virtual system? In a physical OCT system, you would need to acquire data using some kind of acquisition card, like you mentioned in section five and on the processing page, and then transfer it to RAM.

@whedon generate pdf

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

@jdavidli thank you very much! Your performance test results look great!

In the third line of the introduction, it should say "by combining a reference beam."

→ Thanks! I have changed it.

In section two, towards the end, where you talk about physical or virtual OCT systems, I think it would be useful to add a sentence explaining what you mean by virtual OCT systems.

→ Thank you very much, you are right about this! In order not to make the paper any longer (it is already slightly above the suggested length of 250-1000 words), I rephrased the sentence so that the word "virtual" is no longer used. I hope this avoids any confusion regarding the virtual OCT system, as it is not mentioned anymore. Please let me know what you think.

raw data from the OCT system is transferred to RAM. I might've missed this, but how would is that simulated in a virtual system?

→ The plugin _Virtual OCT System_ reads raw data from the hard disk and transfers it to RAM. In an actual OCT system there would be some kind of acquisition card (just like you mentioned) that does not need to store the acquired data on the hard disk; the data can be transferred directly to RAM.
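Conceptually, it boils down to something like this minimal sketch (the class and method names are invented for illustration and do not match the actual OCTproZ plugin interface):

    #include <fstream>
    #include <vector>

    // Illustrative only: an "acquisition" that streams raw data from disk
    // into a RAM buffer -- the same buffer an acquisition card would
    // otherwise fill directly (e.g. via DMA).
    class VirtualAcquisition {
    public:
        VirtualAcquisition(const char* path, std::size_t bufferBytes)
            : file(path, std::ios::binary), buffer(bufferBytes) {}

        // Fill the RAM buffer with the next chunk of raw data.
        // Returns false once the file is exhausted.
        bool acquireNextBuffer() {
            file.read(reinterpret_cast<char*>(buffer.data()),
                      static_cast<std::streamsize>(buffer.size()));
            return file.gcount() == static_cast<std::streamsize>(buffer.size());
        }

        const std::vector<unsigned char>& ramBuffer() const { return buffer; }

    private:
        std::ifstream file;
        std::vector<unsigned char> buffer;
    };

In a physical system, acquireNextBuffer() would instead wait for the acquisition card to signal that it has filled the buffer.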

@spectralcode - thanks for the updates.

@jdavidli - with these latest changes, are you ready to check off the remaining items in your reviewer checklist above?

@spectralcode thanks for making those changes!
@arfon yes this concludes my review.

@whedon generate pdf

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

@spectralcode - could you please merge this PR which implements a few small fixes to your paper: https://github.com/spectralcode/OCTproZ/pull/5

After this, could you make a new release of this software that includes the changes that have resulted from this review? Then, please make an archive of the software on Zenodo/figshare/another service and update this thread with the DOI of the archive. For the Zenodo/figshare archive, please make sure that:

  • The title of the archive is the same as the JOSS paper title
  • The authors of the archive are the same as the JOSS paper authors

I can then move forward with accepting the submission.

@whedon generate pdf

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

@arfon Thank you for the fixes. I merged your pull request!
Here is the DOI of the Zenodo archive: https://doi.org/10.5281/zenodo.4148992

The version number of the most recent release is v1.2.1

Please note: I have made some minor changes to the paper. One noticeable change is figure 2, which I redesigned to make the text in the figure easier to read. I hope this is fine with everybody; please let me know if not.

@whedon set 10.5281/zenodo.4148992 as archive

OK. 10.5281/zenodo.4148992 is the archive.

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.1364/boe.3.003067 is OK
- 10.1364/oe.18.011772 is OK
- 10.1117/1.3548153 is OK
- 10.1117/1.JBO.17.10.100502 is OK
- 10.1109/fccm.2011.27 is OK
- 10.1117/1.JBO.18.2.026002 is OK
- 10.1364/OE.18.024395 is OK
- 10.1364/BOE.5.002963 is OK

MISSING DOIs

- None

INVALID DOIs

- None

:wave: @openjournals/joss-eics, this paper is ready to be accepted and published.

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1876

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1876, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

๐Ÿฆ๐Ÿฆ๐Ÿฆ ๐Ÿ‘‰ Tweet for this paper ๐Ÿ‘ˆ ๐Ÿฆ๐Ÿฆ๐Ÿฆ

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1877
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.02580
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

    Any issues? Notify your editorial technical team...

@jdavidli, @brandondube - many thanks for your reviews here โœจ

@spectralcode - your paper is now accepted into JOSS :zap::rocket::boom:

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.02580/status.svg)](https://doi.org/10.21105/joss.02580)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.02580">
  <img src="https://joss.theoj.org/papers/10.21105/joss.02580/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.02580/status.svg
   :target: https://doi.org/10.21105/joss.02580

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

@brandondube , @jdavidli many thanks for your reviews and to @kthyng and @arfon for editing this submission!
