Joss-reviews: [REVIEW]: FEST-3D: Finite-volume Explicit STructured 3-Dimensional solver

Created on 9 Jul 2019 · 70 comments · Source: openjournals/joss-reviews

Submitting author: @jayten (Jatinder Pal Singh Sandhu)
Repository: https://github.com/FEST3D/FEST-3D
Version: v1.1
Editor: @labarba
Reviewers: @hariradh, @rouson
Archive: 10.5281/zenodo.3660408

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/3320d5c213241912451ed84a3b3672b7"><img src="http://joss.theoj.org/papers/3320d5c213241912451ed84a3b3672b7/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/3320d5c213241912451ed84a3b3672b7/status.svg)](http://joss.theoj.org/papers/3320d5c213241912451ed84a3b3672b7)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@hariradh, @rouson please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @labarba know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @hariradh

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.1
  • [x] Authorship: Has the submitting author (@jayten) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @rouson

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.1
  • [x] Authorship: Has the submitting author (@jayten) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

All 70 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @hariradh it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' for https://github.com/openjournals/joss-reviews

  2. You may also like to change your default settings for watching repositories in your GitHub profile: https://github.com/settings/notifications

For a list of things I can do to help you, just type:

@whedon commands
Attempting PDF compilation. Reticulating splines etc...

@whedon add @rouson as reviewer

OK, @rouson is now a reviewer

👋 @hariradh, @rouson — This is where the action happens. You each have a review checklist above to work through. As you do, feel free to post your reviewer comments to the author in this thread. You may also open issues in the software repository as needed, and post a link here. Ask me anything!

Reviewer Guidelines on our dedicated documentation site 🚀

I have trouble compiling the code based on the instructions. The cmake step hangs.

$ FC=mpif90 cmake ..
-- The Fortran compiler identification is GNU 9.1.0
-- Checking whether Fortran compiler has -isysroot
-- Checking whether Fortran compiler has -isysroot - yes
-- Checking whether Fortran compiler supports OSX deployment target flag
-- Checking whether Fortran compiler supports OSX deployment target flag - yes
-- Check for working Fortran compiler: /usr/local/bin/mpif90
-- Check for working Fortran compiler: /usr/local/bin/mpif90  -- works
-- Detecting Fortran compiler ABI info
-- Detecting Fortran compiler ABI info - done
-- Checking whether /usr/local/bin/mpif90 supports Fortran 90
-- Checking whether /usr/local/bin/mpif90 supports Fortran 90 -- yes
-- CMAKE_BUILD_TYPE not given, defaulting to RELEASE

I am testing the code using:

  • Cmake 3.15
  • gcc 9.1.0
  • MPICH2 3.3.1
  • OSX 10.14.5

Cmake, gcc, and MPICH2 were installed using homebrew.

The compilation problem on MacOSX has been sorted out. The following piece of code in the CMake module "SetFortranFlag.cmake" was causing the problem.

# Don't add underscores in symbols for C-compatibility
SET_COMPILE_FLAG(CMAKE_Fortran_FLAGS "${CMAKE_Fortran_FLAGS}"
                 Fortran "-fno-underscoring")

So, the issue was corrected on the "Fixed the Compatibility issue of MacOSX for CMake script" branch and merged into master.

The latest state of the code on the master branch has been tested on OSX version 10.13.4 using the following:

  • CMake-3.15.0
  • Gcc-9.1.0
  • Libmpc-1.1.0
  • Mpfr-4.0.2
  • Isl-0.21
  • Gmp-6.1.2_2
  • mpich-3.3.1

Compilation was done using
$ FC=mpif90 cmake ..
and then
$ make -j 4

👋 @hariradh, @rouson — how are you getting on with this review? I see the author has made some fixes, but no more comments in the last 3 weeks. Can we have a status update and an estimate of the time needed to complete the reviews?

@labarba @jayten

I apologize for the delay but I am having trouble testing the functionality of the code. The code fails when running the automaton.py script in the CheckInput function. I was trying to figure out the cause before sending my comments to the authors but have not been able to complete the task due to other priorities. Here are a couple of examples:

ginger:run hari$ python automaton.py
Traceback (most recent call last):
  File "automaton.py", line 347, in <module>
    CheckInput(ExpectedControl, ExpectedScheme, ExpectedFlow, ExpectedOutputControl, ExpectedResidualControl, Control, Scheme, Flow, OutputControl, ResidualControl)
  File "automaton.py", line 187, in CheckInput
    assert all(variable in ExpectedOutputControl['In'] for variable in OutputControl['In'])
AssertionError
ginger:LidDrivenCavity hari$ python automaton.py
Traceback (most recent call last):
  File "automaton.py", line 340, in <module>
    CheckInput(ExpectedControl, ExpectedScheme, ExpectedFlow, ExpectedOutputControl, ExpectedResidualControl, Control, Scheme, Flow, OutputControl, ResidualControl)
  File "automaton.py", line 183, in CheckInput
    assert len(next(os.walk(GridDir))[2]) == NumberOfBlocks
StopIteration

First error:

The automaton.py in the run directory is a sample script. It is not meant to be executed as-is: its inputs are filled with arbitrary values provided only to explain the variables. Users are expected to copy this script and modify it according to their needs. In the Tutorial subfolder, a different automaton.py file is provided with specific inputs.

Second error:

The assertion error indicates that the number of blocks provided as input in the file (NumberOfBlocks) does not match the number of grid files available in GridDir. In other words, the grid files were not created before the setup script automaton.py was called. This assertion helps avoid a breakdown of the solver after it is launched: each block in the solver requires an individual grid file.

As described in the documentation, the grid files for the lid-driven cavity test case can be created easily by switching to the CreateBlocks subfolder and executing the make command. An important point to note here is that grid generation requires numpy version 1.10 or above for Python 2.7. Once the grids are generated, the error should not occur.

Try the following steps in the LidDrivenCavity folder:

$ cd CreateBlocks/
$ make

An error might occur if the numpy version is less than 1.10, as the stack function is not available in older versions.

$ cd ../
$ python automaton.py

Another common mistake is not setting AbsBinaryPath. All the other inputs in automaton.py will work without changes, but AbsBinaryPath must be the absolute path to the FEST-3D binary compiled on your local computer.

In the automaton.py file, make sure you change the following:
AbsBinaryPath="/Path/to/FEST-3D/bin/FEST3D"

More details can be found in the Documentation

@rouson thanks again for helping us review this JOSS submission. Can you please update us on your progress with this review? If you need more time please inform us as well. Thanks!

@hariradh do the comments by @jayten help you trouble shoot those errors?

I cannot find any unit tests or API documentation. @jayten, could you please let me know if there are any.

Other than that, at least one case like the LDC needs clearer documentation (like that provided above) explaining how to run a test case for validation.

@hariradh
I will get back on your query in a few days. We are working on it.

👋 @rouson — Do you think you can complete this review within the next week or two?

@hariradh

API Documentation

The FEST-3D website provides detailed documentation of the different source files, modules, procedures, and the main program. This documentation is generated with the FORD documentation generator. You can follow the links to find information on all the subroutines and functions written in the different modules.

Unit Tests

It is rather difficult to write unit tests for the FEST-3D code; therefore, we have written integrated tests instead. The FEST-3D repository has been updated with a new folder: test/. Please pull the changes from the master branch on GitHub. A Python script, Test.py, is provided in the test/ directory to run the integrated tests with a particular flux scheme, higher-order method, and turbulence model. After generating the binary, you can run the test script with the following command in the test/ folder:

$ cd <FEST-3D/root/folder>/test/
$ python Test.py <arg1> <arg2> <arg3>

  1. <arg1>: Flux scheme

    • allowed options: ausm, ldfss0, ausmUP, ausmP, slau

  2. <arg2>: Higher-order method

    • allowed options: muscl, ppm, weno

  3. <arg3>: Turbulence model

    • allowed options: sst, sst2003, kkl, sa

Examples:

  1. $python Test.py ausm muscl sst
  2. $python Test.py slau weno sa
  3. $python Test.py ausmUP ppm kkl

Please note that any flux-reconstruction scheme, higher-order method, or turbulence model added by a developer can also be tested with the Test.py script by suitably modifying the arguments passed to it.

Once the tests are complete, you can check the summary in the Report.txt file. For more details about the integrated tests, please go through the TEST webpage in the FEST-3D documentation.
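The argument handling described above could be validated along these lines (an assumed sketch, not the actual Test.py source; the option lists are taken from the allowed options above):

```python
# Validate the three positional arguments against the allowed options.
ALLOWED = {
    "flux scheme": {"ausm", "ldfss0", "ausmUP", "ausmP", "slau"},
    "higher-order method": {"muscl", "ppm", "weno"},
    "turbulence model": {"sst", "sst2003", "kkl", "sa"},
}

def validate_args(args):
    """Raise ValueError unless args is a valid (flux, order, model) triple."""
    if len(args) != 3:
        raise ValueError("usage: python Test.py <flux> <order> <turbulence>")
    for value, (name, allowed) in zip(args, ALLOWED.items()):
        if value not in allowed:
            raise ValueError("unknown %s: %r" % (name, value))
```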

How to run FEST-3D

Also, as requested by @hariradh, we have updated the website with a new page on "How to run FEST-3D." This page provides a step-by-step approach on how to run the Lid-driven cavity test case.

@labarba Yes, I believe I can complete my review within a week of today.

@labarba @Kevin-Mattheus-Moerman I'm attempting to complete my review but finding that I'm unable to check any of the checkboxes on my checklist. Is this submission still under review?

You mean, while logged in to GitHub, you click on the box, and nothing happens? It could be a permissions issue. Though you should have been automatically added to the right team when added as a reviewer. @arfon : could you check this?

@rouson - you need to accept this invite: https://github.com/openjournals/joss-reviews/invitations

@arfon - that did it. Thanks!

@whedon accept

I'm sorry @hariradh, I'm afraid I can't do that. That's something only editors are allowed to do.

@labarba I recommend that with the latest changes and documentation provided by @jayten, the paper be accepted.

👋 @rouson — what is your ETA for the review? (We already have a recommendation from the other reviewer.) Thanks!

👋 @rouson — checking in with you on your review again...

I have sent @rouson an email now.

@labarba thanks for the email. I somehow got subscribed to every JOSS communication so these messages got buried in the resulting flood. Hopefully I just resolved it by clicking "Unwatch." I'll work on my review later today and post the results.

👋 @rouson — how are you getting on with this review? Please give us an update.

@labarba I have added two issues that need to be addressed in order for me to complete my review. In addition, I am concerned about the appropriateness of releasing new software based on a language version that is very nearly three decades old (Fortran 90) and using a parallel programming model that is over two decades old (MPI-1). The Fortran standard has undergone 4 revisions since Fortran 90, each of which has added features that could be used to make the authors' code considerably more clear and concise and could potentially have benefits in robustness and performance. MPI has undergone two revisions since MPI-1 and more importantly, the most recent Fortran standards provide a sufficiently rich set of parallel features that it's not even clear there is a need for embedding MPI directly in _new_ projects. The compiler and/or parallel runtime library can generate the MPI and would likely generate more sophisticated MPI (or an alternative parallel programming model) than scientific application developers would likely write so again there's the possibility for greater clarity, conciseness, robustness, and performance.

@labarba @danielskatz I just added another issue on this submission's repository with a 10-item list of modernization suggestions. Despite the list's length, it is surely not exhaustive as I've only reviewed a few files. I'd like your guidance on whether archaic use of the language is a relevant review criterion. I can't see where it fits into any of the items in my reviewer checklist, but after writing this latest issue, I would recommend rejection unless you're of the opinion that my concern is not part of the allowable criteria. I feel strongly that it's time for the world to move on from coding to a language standard that is very nearly 3 decades old. The code could definitely be more concise and robust and might even be higher performing if it were extensively overhauled and modernized to exploit the Fortran 2018 features that are supported by the compiler recommended in the installation instructions (gfortran). In a much more modern form, the code could be a great contribution to the literature.

I just added yet another issue as well as a lengthy comment on a previous issue. The more I look at the code, the more confident I am in recommending rejection. I'm certain this code addresses an important problem and is therefore useful, but it really needs a major revision to come up to the standards I would expect of any modern open-source package.

@rouson --- Thank you very much for the detailed recommendations for improvements that you have provided to the authors.

Note, though, that the JOSS review process seldom leads to a _rejection_, in the manner of traditional journals. The Review Guidelines, in the section "What happens if the software I'm reviewing doesn't meet the JOSS criteria?", say:

We ask that reviewers grade submissions in one of three categories: 1) Accept 2) Minor Revisions 3) Major Revisions. Unlike some journals we do not reject outright submissions requiring major revisions - we’re more than happy to give the author as long as they need to make these modifications/improvements.

In other words, we aim to help authors improve their software _until it passes the bar_ to be accepted. They can, of course, decide that they are not willing to do the improvements, and request to withdraw the submission.

In this particular case, your critique of the software will need to be assessed by the authors, and they are allowed to come back to us with a proposal of what / how much they want to implement of your suggestions. The authors are not required to use particular languages or language features. JOSS reviewers/editors may insist for updates in the case of languages no longer supported (or soon to be), like Python 2.x. That is not the case with old Fortran, which is very much still used and supported.

It is the case, however, that using old languages may make the software less likely to be reused, less likely for others to contribute to it, and less likely to be cited. This is not grounds for rejection, but should be of keen interest to the authors.

I encourage the authors to assess the recommendations, and the improvements they would like to make. For example, this critique:

β€œhundreds of lines of code blocks comprised of loops nested 4 levels deep for no purpose other than computing a sum that could presumably be computed in a single line of code by invoking the sum function.”

... it sounds like a sizable reduction in number of lines of code is possible with some modernizing. Yet, the authors could have concerns with compilers available in their machines supporting the modern features.
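To make the quoted critique concrete, here is a pure-Python illustration (the array and its values are invented for the example; in the Fortran source, the intrinsic sum over a whole array plays the same role) of collapsing a four-level loop nest that only accumulates a total:

```python
# A 4-D nested list standing in for a solver's 4-index array
# (invented example data, not FEST-3D code).
a = [[[[float(i + j + k + l) for l in range(5)] for k in range(4)]
      for j in range(3)] for i in range(2)]

# The pattern the reviewer flagged: four nested loops just to accumulate.
total = 0.0
for plane in a:
    for row in plane:
        for col in row:
            for v in col:
                total += v

# The one-expression equivalent; Fortran's intrinsic sum() collapses
# the same nest in the same way.
flat_total = sum(v for plane in a for row in plane for col in row for v in col)
assert total == flat_total
```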

@jayten — Please discuss with your team, and get back to us with your assessment.

@labarba I love that and fully support everything you wrote. JOSS is a breath of fresh air!

We are thankful to the reviewer @rouson for his insightful comments. Our team is not familiar with the latest Fortran version, but based on the reviewer's suggestion, using the newer features supported by the latest Fortran standard seems like the right way forward. However, at the same time, we want to make sure that it does not conflict with our already planned work: implementation of OpenACC directive-based parallelisation of loops. As gfortran 9 only provides experimental and incomplete support for OpenACC, the new members of our team have chosen the PGI compiler for directive-based GPU programming. Although coarrays seem very simple and productive to use, the PGI compiler appears to be holding us back from adopting them, as it is still only Fortran 2003 compliant. We are also not sure whether array operations are compatible with OpenACC, as our preliminary reading suggests that OpenACC directives recognise the loops to be run in parallel on the GPU from the "do" and "end do" statements. We are still new to this field (the use of OpenACC directives) and will be happy to be corrected if we are wrong.

Since most of the other suggestions by the reviewer are doable without any conflict with our future work, we propose to do the following work for now:

  1. Reducing the length of the code wherever possible (e.g., a single subroutine for all three directions of the 3D grid, use of derived data types, etc.)
  2. Making variables local to the module.
  3. Passing arguments to subroutines instead of using only global variables.
  4. Writing unit tests with CTest.
  5. Removing unnecessary comments and files from the code.

A point to note: the portion of code that the reviewer suggested could be replaced with a single sum statement is actually a buffer-collection operation; the buffer is then passed across nodes using MPI_Send and MPI_Recv calls. We realise, though, that with the use of coarrays this would be just a few lines of code.

👋 @jayten — Do let us know what your timeline looks like for the work you have proposed to do. Best wishes for the New Year! 🎆

@labarba
The proposed work has been completed. The changes have been tested and merged with the master branch. Please review the changes here.

The summary of the changes made is:

  1. Added an install bash script for automatic installation of FEST-3D and its dependencies on Ubuntu, CentOS, and MacOS platforms. This script installs the dependencies, builds the executable, and runs the unit and integrated tests.
  2. Added Travis CI.
  3. Since Python 2 will no longer be maintained past 2020, the Python scripts have been migrated from version 2 to version 3.
  4. Clustered similar variables using derived data types, which helps shorten the argument lists of various subroutines.
  5. Instead of using global variables, most variables are passed as subroutine arguments.
  6. Updated CMake to automatically find the MPICH library, which removes the need to specify the Fortran compiler when executing the CMake file.
  7. Integrated CTest with CMake to avoid additional dependencies for the unit-test setup.
  8. Added 25 unit tests.
  9. Removed commented code.
  10. Deleted unnecessary archived/unused files.
  11. Updated the documentation with the above changes.

Additionally, the install script sets the "FEST3D" environment variable, which avoids the error faced by both reviewers while setting up the test cases with the Python script in the "run/" directory.

👋 @hariradh, @rouson — The authors of this submission have made substantial improvements, and we ask that you take a look at this work and advise on the state of the software now. If you are ready to recommend this for publication, please let us know. Thank you!

@labarba This is a very impressive step forward. I recommend acceptance. I just installed FEST-3D on macOS and had a much better experience with the process this time. After I issued the one simple installation command provided in the repository's README.md, the installation completed without problem, including running and passing all 25 unit tests and all 3 integration tests. Ease of installation greatly improves the user experience and passing tests greatly improve user confidence.

I also looked at several source files and believe the code at least makes better use of Fortran 90. I encourage the authors to explore and exploit the numerous major advances made in more recent Fortran standards, including support for object-oriented programming, parallel programming, and functional programming. I suspect the code could still be made quite a bit less verbose through the use of array statements and more robust by either eliminating pointers or incorporating reference counting. At a minimum, all pointers should be default-initialized to null() so that no pointer starts life in an undefined state. I hope to find the time to make more suggestions via pull request.

@rouson — Are you ready to tick off the remaining items in your review checklist?

@hariradh — I see all your checklist items are ticked off. Can we have a confirmation here that you recommend acceptance?

I confirm that I recommend acceptance.


@labarba I just checked off all remaining boxes on my reviewer checklist.

@whedon generate pdf

Please merge https://github.com/FEST3D/FEST-3D/pull/11 which provides small typo fixes.

Then, you need to do these final steps:

  • [x] Please make a tagged release of your software, and list the version tag of the archived version here.
  • [x] Archive the reviewed software in Zenodo (or a similar service)
  • [x] Check the Zenodo deposit has the correct metadata, this includes the title (should match the paper title) and author list (make sure the list is correct and people who only made a small fix are not on it); you may also add the authors' ORCID.
  • [x] Please list the Zenodo DOI of the archived version here.

You may find this helpful: https://guides.github.com/activities/citable-code/

I have released the reviewed FEST-3D software as a new version with tag v1.1 and archived it on Zenodo with DOI 10.5281/zenodo.3660408.

@whedon set v1.1 as version

OK. v1.1 is the version.

@whedon set 10.5281/zenodo.3660408 as archive

OK. 10.5281/zenodo.3660408 is the archive.

@whedon accept

Attempting dry run of processing paper acceptance...

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1287

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1287, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

Reference check summary:

OK DOIs

- None

MISSING DOIs

- https://doi.org/10.2514/6.2018-3859 may be missing for title: A convergence study of solutions using two two-equation RANS turbulence models on a finite volume solver for structured grids
- https://doi.org/10.2514/6.1999-936 may be missing for title: A fast, matrix-free implicit method for compressible flows on unstructured grids
- https://doi.org/10.2514/3.12149 may be missing for title: Two-equation eddy-viscosity turbulence models for engineering applications
- https://doi.org/10.2514/6.2016-3941 may be missing for title: Verification and Validation of the k-kL Turbulence Model in FUN3D and CFL3D Codes
- https://doi.org/10.2514/6.1992-439 may be missing for title: A one-equation turbulence model for aerodynamic flows
- https://doi.org/10.1080/1061856031000104851 may be missing for title: High-order Finite Difference and Finite Volume WENO Schemes and Discontinuous Galerkin Methods for CFD
- https://doi.org/10.1016/0021-9991(84)90143-8 may be missing for title: The piecewise parabolic method (PPM) for gas-dynamical simulations
- https://doi.org/10.1243/jmes_jour_1980_022_043_02 may be missing for title: Natural Transition of Boundary Layersβ€”The Effects of Turbulence, Pressure Gradient, and Flow History
- https://doi.org/10.1016/b978-0-444-89802-9.50059-9 may be missing for title: Further progress in the turbulence modelling of by-pass transition
- https://doi.org/10.1007/978-94-015-8666-5_6 may be missing for title: One point closure applied to transition
- https://doi.org/10.2514/6.2001-446 may be missing for title: Predictions of separated and transitional boundary layers under low-pressure turbine airfoil conditions using an intermittency transport equation
- https://doi.org/10.4208/cicp.041109.160910a may be missing for title: Performance of Low-Dissipation Euler Fluxes and Preconditioned LU-SGS at Low Speeds
- https://doi.org/10.31274/etd-180810-4424 may be missing for title: A Bypass Transition Model Based on the Intermittency Function
- https://doi.org/10.1016/j.jcp.2005.09.020 may be missing for title: A sequel to AUSM, Part II: AUSM+-up for all speeds
- https://doi.org/10.2514/1.j050905 may be missing for title: Parameter-Free Simple Low-Dissipation AUSM-Family Scheme for All Speeds
- https://doi.org/10.1016/b978-008044114-6/50053-3 may be missing for title: Transition Modelling Based on Local Variables
- https://doi.org/10.2514/6.2005-522 may be missing for title: Transition modeling for general CFD applications in aeronautics
- https://doi.org/10.1016/b978-008044544-1/50003-0 may be missing for title: Transition Modelling for General Purpose CFD Codes
- https://doi.org/10.2514/1.42362 may be missing for title: Correlation-based transition modeling for unstructured parallelized computational fluid dynamics codes
- https://doi.org/10.2514/6.2013-2531 may be missing for title: A Novel Intermittency Distribution Based Transition Model For Low-Re Number Airfoils
- https://doi.org/10.1115/1.2929110 may be missing for title: The 1991 IGTI Scholar Lecture: The Role of Laminar-Turbulent Transition in Gas Turbine Engines
- https://doi.org/10.1016/j.ijheatfluidflow.2008.09.004 may be missing for title: Correlations for modeling transitional boundary layers under influences of freestream turbulence and pressure gradient
- https://doi.org/10.2514/6.1998-867 may be missing for title: Hypersonic boundary layer transition for X-33 phase II vehicle
- https://doi.org/10.1016/0021-9991(78)90023-2 may be missing for title: A survey of several finite difference methods for systems of nonlinear hyperbolic conservation laws
- https://doi.org/10.1016/0017-9310(73)90125-7 may be missing for title: The calculation of low-Reynolds-number phenomena with a two-equation model of turbulence
- https://doi.org/10.2514/3.59994 may be missing for title: Simulation of Transition with a Two-Equation Turbulence Model
- https://doi.org/10.1115/imece2002-32740 may be missing for title: A New Model for Boundary-Layer Transition Using a Single-Point RANS Approach
- https://doi.org/10.1017/cbo9780511755385.018 may be missing for title: DNS of bypass transition
- https://doi.org/10.2514/3.1784 may be missing for title: BOUNDARY LAYER TRANSITION- FREESTREAM TURBULENCE AND PRESSURE GRADIENT EFFECTS
- https://doi.org/10.2514/6.2012-672 may be missing for title: One-Equation Transition Closure for Eddy-Viscosity Turbulence Models in CFD
- https://doi.org/10.1016/0021-9991(82)90058-4 may be missing for title: High-Re solutions for incompressible flow using the Navier-Stokes equations and a multigrid method
- https://doi.org/10.4271/975502 may be missing for title: Transition closure model for predicting transition onset
- https://doi.org/10.1063/1.168744 may be missing for title: A tensorial approach to computational continuum mechanics using object-oriented techniques
- https://doi.org/10.2514/1.j053813 may be missing for title: SU2: An open-source suite for multiphysics simulation and design

INVALID DOIs

- None

πŸ‘‹ @jayten β€” Please go over your reference list and add DOIs where available. Use the doi field and put just the identifier, not the full URL. Also, check the capitalization of journal titles, and acronyms like AIAA (appears as Aiaa in places). If you need to protect caps, use{}.

I have added a doi field for each bib entry and also removed the unused entries from the paper.bib file. Additionally, I wrapped the acronyms in the booktitle field in {} to make sure that AIAA appears correctly capitalized.
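As a minimal sketch of the fix described above (the entry key and the booktitle value here are placeholders, not copied from paper.bib; the title and DOI are taken from the reference check earlier in this thread): the doi field holds just the identifier, and extra braces protect the acronym from being lowercased by the bibliography style.

```bibtex
% Illustrative entry only; key and booktitle are placeholders.
@inproceedings{example2018convergence,
  title     = {A convergence study of solutions using two two-equation
               {RANS} turbulence models on a finite volume solver for
               structured grids},
  booktitle = {{AIAA} Fluid Dynamics Conference},
  year      = {2018},
  doi       = {10.2514/6.2018-3859},
}
```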

@whedon accept

Attempting dry run of processing paper acceptance...
Reference check summary:

OK DOIs

- 10.2514/6.2018-3859 is OK
- 10.2514/3.12149 is OK
- 10.2514/6.2016-3941 is OK
- 10.2514/6.1992-439 is OK
- 10.1080/1061856031000104851 is OK
- 10.1016/0021-9991(84)90143-8 is OK
- 10.1016/0021-9991(79)90145-1 is OK
- 10.1007/s10494-015-9622-4 is OK
- 10.1177/0954406217743537 is OK
- 10.1063/1.168744 is OK
- 10.2514/1.J053813 is OK

MISSING DOIs

- None

INVALID DOIs

- None

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1288

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1288, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🐦🐦🐦 πŸ‘‰ Tweet for this paper πŸ‘ˆ 🐦🐦🐦

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1289
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01555
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! πŸŽ‰πŸŒˆπŸ¦„πŸ’ƒπŸ‘»πŸ€˜

    Any issues? notify your editorial technical team...

Congratulations, @jayten, your JOSS article is published! πŸš€

Huge thanks to our reviewers: @hariradh, @rouson β€” we truly appreciate your contribution to this adventure in scholarly publishing πŸ™

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01555/status.svg)](https://doi.org/10.21105/joss.01555)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01555">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01555/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01555/status.svg
   :target: https://doi.org/10.21105/joss.01555

This is how it will look in your documentation: [DOI badge image]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:

On behalf of the FEST-3D team, I would like to thank @hariradh and @rouson for their time and effort to help us improve our work.

Also, a big thanks to @whedon and @labarba, for providing such a unique platform for publishing our work.

Congrats, @jayten
