[REVIEW]: Agents.jl: agent-based modeling framework in Julia

Created on 30 Jul 2019 · 72 comments · Source: openjournals/joss-reviews

Submitting author: @kavir1698 (Ali Rezaee Vahdati)
Repository: https://github.com/kavir1698/Agents.jl
Version: v1.1.8
Editor: @jedbrown
Reviewers: @Datseris, @mozhgan-kch
Archive: 10.5281/zenodo.3477581

Status


Status badge code:

HTML: <a href="http://joss.theoj.org/papers/11ec21a6bb0a6e9992c07f26a601d580"><img src="http://joss.theoj.org/papers/11ec21a6bb0a6e9992c07f26a601d580/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/11ec21a6bb0a6e9992c07f26a601d580/status.svg)](http://joss.theoj.org/papers/11ec21a6bb0a6e9992c07f26a601d580)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Datseris and @mozhgan-kch, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist, please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions/concerns, please let @jedbrown know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @Datseris

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository URL?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.1.8
  • [x] Authorship: Has the submitting author (@kavir1698) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?

Review checklist for @mozhgan-kch

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository URL?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: v1.1.8
  • [x] Authorship: Has the submitting author (@kavir1698) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Labels: accepted, published, recommend-accept, review


All 72 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @Datseris it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' at https://github.com/openjournals/joss-reviews:

[screenshot: repository watch settings]

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...

@Datseris :wave: Welcome, and thanks for agreeing to review! The comments from @whedon above outline the review process, which takes place in this thread (possibly with issues filed in the Agents.jl repository). I'll be watching this thread if you have any questions.

Below are my review comments. Items beginning with [OPTIONAL] are up to the author to tackle, and I am happy to accept publication even without them implemented. The rest must be finalized before I can accept publication.

(this list is updated as the review progresses)

  • [x] The version currently installed through the package manager is 1.1.0. This does not match the submitted version of 1.1.2. Regardless, I think a new version should be associated with the paper anyway, one that includes the bugfixes stemming from this review.
  • [x] A claim of performance is not made directly per se, but the author uses the known fact that Julia is faster than Python and provides a comparison graph in the paper. The caption of the graph then points to the docs, but in the stable version of the docs I could not find more details on the performance graph. For transparency, both the Julia and Python scripts that were used to produce the graph should be available in the documentation (a sketch of the kind of timing harness I have in mind follows this list).
  • [x] [OPTIONAL] possible documentation improvements: https://github.com/kavir1698/Agents.jl/issues/16
  • [x] 2 out of 4 references do not have a DOI (but I don't know whether the actual papers have DOIs either).
  • [x] The paper seems to lack a scientific introduction and motivation. I'd recommend including the opening paragraph of the (dev) documentation in the paper.
  • [x] Solve https://github.com/kavir1698/Agents.jl/issues/21
  • [x] Solve https://github.com/kavir1698/Agents.jl/issues/22

@jedbrown @kavir1698 I think I have now finished the first round of review. The above checklist states what I feel should be taken care of before acceptance.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Thank you, @Datseris, for your review.

@jedbrown I have released a new version (v1.1.7) for the package. I do not know how to change the version in the paper, though.

@whedon set v1.1.7 as version

OK. v1.1.7 is the version.

Please note that, for my review to conclude, a new version of the software should be released in the Julia package ecosystem. That will correspond to the version a user obtains via normal installation, pkg> add Agents. Although this seems a technicality, it is nevertheless the way a Julia package is installed (unless one wants to mess with the master branch, where versions do not really matter).

I recommend that @kavir1698 use JuliaRegistrator, as instructed here: https://github.com/kavir1698/Agents.jl/issues/25
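For context, this is what installing a registered release looks like for a user (a standard Pkg sketch; the explicit version pin is shown for illustration only):

```
# In the Julia REPL, press ] to enter Pkg mode, then:
#   pkg> add Agents
# Or, equivalently, via the Pkg API:
using Pkg
Pkg.add("Agents")                                         # latest registered release
Pkg.add(PackageSpec(name = "Agents", version = "1.1.7"))  # pin a specific release
```

Until the new release is registered, pkg> add Agents will keep resolving to the older registered version.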

Hi @Datseris, v1.1.7 is now released in the Julia package ecosystem.

Great! @jedbrown I conclude my review now. All points pass in my eyes.

Thanks, @Datseris!
@kavir1698, thanks for your patience, I'll add a second reviewer now.

@whedon add @mozhgan-kch as reviewer

OK, @mozhgan-kch is now a reviewer

@mozhgan-kch :wave: Welcome, and thanks for agreeing to review! The comments from @whedon at the top of this thread outline the review process, which takes place in this thread (possibly with issues filed in the Agents.jl repository). There is a review checklist for you above. I'll be watching this thread if you have any questions.

@jedbrown @mozhgan-kch this review has been a bit stale for >3 weeks. Can you give an estimate of when the review process will be completed?
@mozhgan-kch thanks for your help with this review. Please let us know when you feel you can finalize this review process. If you need more time, please let us know as well. Thanks.

Hi @Kevin-Mattheus-Moerman @jedbrown, I would need extra time for this, as I was away on leave and am only back this week. I'll try to sort this out next week. Cheers.

Adding to the above, I would like to make the suggestions below:

  • [x] It would be good to know whether you have done any benchmarking to compare this with other frameworks such as NetLogo or Repast, since you mention these in your paper.
  • [x] It would be beneficial to read about the limitations of this tool/framework in the paper, if there are any.
  • [x] The last reference does not have a DOI.
  • [x] As part of the README.md, it would be good to have a note on how to contribute back to this code and repository, and on the procedure for requesting a feature.
  • [x] The motivation is not clear in the paper.
  • [x] There are various ways to study complex systems, and ABM is one of them. I suggest the author add a line or two on the bottom-up vs. top-down approach and on why ABM and not mathematical equations; see https://www.pnas.org/content/99/suppl_3/7280
  • [x] Lack of citations in _doc version 1.1.7_, section _Why we need agent-based modeling_.
  • [x] Please proofread the paper (e.g., _This is can be important in large agent-based models_).

The main author was not tagged in any of the above posts. Maybe @kavir1698 was not notified of this progress?

@Datseris Thank you for your notification. I will start the revision process soon.

Dear @mozhgan-kch,

Thank you for your comments and review. Below are my responses to your comments.

  1. I have provided a comparison of speed between Agents.jl and Mesa, since they are the most similar to each other. The advantage of Agents.jl over Repast and NetLogo is that Agents.jl is implemented in Julia, which is an easy-to-learn language that is more geared toward data analysis and scientific computing.

  2. I have added a couple of sentences to the paper mentioning some limitations of Agents.jl: "In its current version, it does not provide tools to visualize simulations in real time. Moreover, a GUI would make the package even more accessible."

  3. That referenced paper does not seem to have a DOI; I did not find one.

  4. A note about contributions is in the CONTRIBUTING.md file. I have copied it to the end of the README.md.

  5. I added the following sentences to the paper:
    "The Julia language provides a combination of features that were historically mutually exclusive. Specifically, languages that were fast to write, such as Python, were slow to run, and languages that were fast to run, such as C/C++, were slow to write. The combination of these two features, along with the expressive structure of the language, makes Julia a desirable choice for scientific purposes. Agent-based models can involve hundreds of thousands of agents, each performing certain computations at each time step. Thus, having a modeling framework that makes writing models easier and results in fast code is an advantage."

  6. I added the following paragraph:
    "ABM provides a bottom-up approach for studying complex systems, whereas analytical models have a top-down one [@Bonabeau2002]. The two approaches are complementary and they both can benefit from insights that the other approach contributes. Analytical models make many simplifying assumptions about the systems they study. This results in systems that are tractable and lead to clear conclusions. Agent-based models on the other hand, are more difficult to make sense of because they relax many assumptions of equation-based approaches. This is at the same time an advantage of agent-based models because it allows observing the effect of agent and environment heterogeneity and stochasticity, which can change a model's behavior [@Farmer2009]. ABM is specifically an important tool for studying complex systems where a system's behavior cannot be predicted and has to be explored."

  7. I have now cited four papers on the first line of the section.

  8. Thank you for mentioning the typo. I have checked the paper again.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@kavir1698 Thanks for fixing this. One more comment:

  • [x] Please check the article proof (see above) and fix the citation in paragraph 2.

Thank you, @mozhgan-kch. It should be fixed now.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

Great! Thanks @kavir1698.
@jedbrown, I am happy with the changes.

Thanks for your reviews, @mozhgan-kch and @Datseris.

@kavir1698 I posted a PR above with some copy edits to the paper. I have just a couple of comments before proceeding:

  • The discussion of GUI features is awkward and I don't know what you intend. Is this suggesting that such a feature would be valuable future work? Could you please reword that bit?
  • The Features section is currently a mix of phrases (with or without periods) and full sentences. It should be reworded for consistency.

Thank you, @jedbrown. I have made changes to the paper.

Could you please downcase "cellular automata" (not a proper noun in existing publications), tag a release (annotated tag preferred), and archive your repository on Zenodo or similar? When finished, please report the DOI back here. Thanks.

v1.1.7 is already tagged. Do you mean to tag a new release?

Yes, we ask you to archive the post-review software, and it should be tagged. It could be v1.1.7.1 or v1.1.8, as you see fit.

Agents.jl v1.1.7 already has all the changes that were suggested in the review process. The second review did not require any changes to the code, only to the paper. Shall I still make a new release?

If the changes were confined to the paper, then archiving v1.1.7 is okay, but if there are changes to the project source or documentation, please tag a new release.

Since there was a small change in the documentation, I tagged a new release. Here is the DOI from Zenodo: doi.org/10.5281/zenodo.3477581.

doi.org is being slow. Can you please update the author list to match the paper (i.e., just you)?

@whedon set v1.1.8 as version

OK. v1.1.8 is the version.

@whedon set 10.5281/zenodo.3477581 as archive

OK. 10.5281/zenodo.3477581 is the archive.

The author list is updated.

@whedon accept

Attempting dry run of processing paper acceptance...

```
Reference check summary:

OK DOIs

  • 10.1038/460685a is OK
  • 10.1016/j.ecolmodel.2006.04.023 is OK
  • 10.1177/0037549705058073 is OK
  • 10.25080/majora-7b98e3ed-009 is OK
  • 10.1186/2194-3206-1-3 is OK
  • 10.1073/pnas.082080899 is OK

MISSING DOIs

  • None

INVALID DOIs

  • None
```

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1012

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1012, then you can move forward with accepting the submission by compiling again with the flag deposit=true, e.g.
@whedon accept deposit=true

Thanks, @kavir1698. Over to you, @openjournals/joss-eics.

Thank you, @jedbrown.

@kavir1698 congratulations!

Thank you very much, @Datseris

@kavir1698: could you edit the metadata of the Zenodo deposit so the title matches the paper?

@labarba, the title is updated.

I just finished reading the paper. All good, except for the performance comparison in Figure 1: it would be nice if you provided the specs of the hardware used for the test!

I have added the following line to the figure caption:

"The comparison was performed on a Windows machine with a i7-6500U CPU and 16 GB of RAM."

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1014
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01611
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

    Any issues? Notify your editorial technical team...

Congratulations, @kavir1698, your JOSS paper is now published! 🚀

Huge thanks to our editor, @jedbrown, and the reviewers, @Datseris and @mozhgan-kch; your contribution to JOSS is much appreciated 🙏

Thank you all.

@arfon: I just looked at the PDF, and it doesn't contain the last commit by the author. Is @whedon not able to catch a very recent commit?

@labarba @arfon could the issue be that the change was made after the initial whedon accept command, and then whedon used that version with the deposit?

I don't think so. In another paper I just published, some copy edits I submitted via PR did not appear in the PDF when I ran @whedon accept immediately after the merge, but after waiting a while I ran it again and they got caught. I have a feeling there's some caching delay or whatnot.

Here I did @whedon accept deposit=true too quickly, I'm afraid...

I just regenerated the PDF locally and updated the joss-papers repo with the new PDF.

This is showing up as fixed for me now but might take a few hours to show up as modified for some of you as there's caching in place for the PDFs.

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01611/status.svg)](https://doi.org/10.21105/joss.01611)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01611">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01611/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01611/status.svg
   :target: https://doi.org/10.21105/joss.01611

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing either one (or both) of the following:
