Joss-reviews: [REVIEW]: momepy: Urban Morphology Measuring Toolkit

Created on 14 Oct 2019  ·  66 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @martinfleis (Martin Fleischmann)
Repository: https://github.com/martinfleis/momepy
Version: v0.1.0
Editor: @drvinceknight
Reviewers: @martibosch, @geraintpalmer
Archive: 10.5281/zenodo.3544609

Status

[status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/ee569f71301e610155a215c74e81b559"><img src="https://joss.theoj.org/papers/ee569f71301e610155a215c74e81b559/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/ee569f71301e610155a215c74e81b559/status.svg)](https://joss.theoj.org/papers/ee569f71301e610155a215c74e81b559)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@martibosch & @geraintpalmer, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @drvinceknight know.

✨ Please try and complete your review in the next two weeks ✨

Review checklist for @martibosch

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@martinfleis) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [ ] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @geraintpalmer

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@martinfleis) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Labels: accepted, published, recommend-accept, review

Most helpful comment

@openjournals/joss-eics this paper is ready to be accepted.

Thanks again to @geraintpalmer and @martibosch for your time and effort reviewing this work. :+1:

All 66 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @martibosch, @geraintpalmer it looks like you're currently assigned to review this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that, with GitHub's default behaviour, you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

[screenshot: setting 'Not watching']

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

[screenshot: GitHub notification settings]

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...

@drvinceknight - is there any news on these reviews? It seems like they haven't started yet?

No news, but both @geraintpalmer and @martibosch had indicated it might take them a while. @geraintpalmer @martibosch, very appreciative of your time: just pinging in case this slipped off your horizon.

Hello! Sorry for my delay. I had a quick look at the library yesterday, but I have to get some other things done before Friday. I will be getting to momepy in more detail over the weekend; I hope this does not delay the reviewing process too much.

I threw an eye on the library yesterday

sounds awful - sorry

I am sorry about that. I have updated the comment, and I hope that the language used now is more appropriate. Thank you

Thanks - I was just a bit uncertain how to read the original version - thanks again for working on this review.

Apologies, getting to this today.

Submitted issue https://github.com/martinfleis/momepy/issues/103 to the original repository.

Submitted issue martinfleis/momepy#106 to the original repository.

@whedon generate pdf

A comment (part of my review) here, as I believe raising a dedicated issue is not appropriate for it.

I wonder whether (I say "wonder" because I am sure that the author @martinfleis has given it much more thought than I have) the object-oriented approach to the computation of characters (especially in the dimension module, but also in the shape, intensity and diversity modules) is the most appropriate design.
Most of the classes consist of only the __init__ method, whose main purpose is to return a pandas Series. I believe that such a design is less intuitive for the end user. For instance, if I check the output of tess_car in a Python interpreter, I get <momepy.intensity.AreaRatio at 0x7f37a235a240>. I am most likely interested in the pandas Series of area ratios of my tessellation, but why should I know that this is set as its ar attribute? Furthermore, since there are many characters implemented in momepy, the user needs to know the respective attribute name each time. I'd suggest either implementing the __repr__ method so that the user gets more information, or making the result accessible through an attribute that has the same name across all character classes (e.g., series).
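For concreteness, a minimal sketch of what I have in mind, with a hypothetical class name and a toy computation rather than momepy's actual API, assuming the result is a pandas Series:

```python
import pandas as pd

class AreaRatio:
    """Hypothetical character class (not momepy's actual API): the result is
    always exposed under a shared `series` attribute and __repr__ says so."""

    def __init__(self, gdf, values):
        # store the computed character under a name shared by every character class
        self.series = pd.Series(values, index=gdf.index, name="area_ratio")

    def __repr__(self):
        return f"<AreaRatio: {len(self.series)} values; result available as `.series`>"

# toy frame standing in for a tessellation GeoDataFrame
tess = pd.DataFrame(index=range(3))
print(AreaRatio(tess, [0.4, 0.7, 0.5]))          # informative repr instead of a bare object
print(AreaRatio(tess, [0.4, 0.7, 0.5]).series)   # the pandas Series the user is after
```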

  • the __init__ method also sets some other attributes, but they are often just taken directly from the provided arguments, or are the result of very straightforward computations (e.g., this line)
  • some classes have methods other than __init__ (e.g., this or this), but I have not seen any that could not be implemented outside the class

Therefore, I see no gains from the object-oriented approach (again, let me re-state that this is just my first impression and that I might be missing many possibilities that justify the author's adoption of the object-oriented design), other than design consistency, e.g., with the more convoluted elements.py module.
Of course, consistency is desirable, but I wonder whether the object-oriented design is the best approach for this case. From what I understand, users are interested in the series of computed characters, thus it might be more appropriate to implement the computation of characters as functions instead, e.g.:

```python
def floor_area(gdf, heights, areas=None):
    # code to compute `fa` here
    return fa
```

Of course I do not expect the author to change the code organisation at this point (and again, there might be good reasons for the object-oriented approach that I am missing), but I just wanted to raise the question.

@martibosch Thank you for this comment. It is very valid, and there has been a lot of discussion on this topic, as momepy was originally designed as you propose - it simply returned pandas Series. However, this has been changed to the object-oriented design for several reasons.

First is consistency within the package, as some classes return more than one series (like StreetProfile). The possible implementation of __repr__ and a consistent attribute to get the resulting series (if there is only one) is a good point; I'll certainly look to do both in the future. Thank you for that idea.

Second is consistency with PySAL, which is very close to the scope of momepy. When discussing how momepy should be designed, I mostly adopted their object-oriented approach (example: https://github.com/pysal/inequality/blob/master/inequality/gini.py).

As I expect momepy to grow, I also wanted to keep our classes more flexible and extensible than functions are. I expect that some more complex classes might get additional attributes, and it is preferable to keep them within one class rather than having individual functions. Some surely won't, but then, again, we want consistency.

Finally, classes also store the parameters used for each computation. Some of them generate additional information under the hood, for example, spatial weights or perimeter values. I see value in being able to access them (as they can be reused - spatial weights might be expensive to compute - or explored). Returning them alongside the resulting series would just complicate the design.
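To illustrate the kind of reuse I mean, a minimal sketch, again with a hypothetical class and a dummy stand-in for the expensive spatial-weights step rather than momepy's actual API:

```python
import pandas as pd

class Character:
    """Hypothetical sketch (not momepy's actual API): an expensive intermediate
    is built once, kept as an attribute, and can be reused by a later call."""

    def __init__(self, gdf, weights=None):
        # reuse precomputed weights if the caller already has them
        self.weights = weights if weights is not None else self._build_weights(gdf)
        self.series = pd.Series(0.0, index=gdf.index, name="character")

    @staticmethod
    def _build_weights(gdf):
        # stand-in for an expensive spatial-weights computation
        return {i: [] for i in gdf.index}

gdf = pd.DataFrame(index=range(5))
first = Character(gdf)
second = Character(gdf, weights=first.weights)  # the expensive step is not repeated
```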

I have to admit that this decision was not straightforward, but in the end I felt that the object-oriented approach is a bit more favourable here. But I am open to discussing whether the decision was right.

@martinfleis thank you for the explanations, which have been very convincing. Again, you indeed know momepy way better than I do, so after your explanations, I fully trust your criteria. As a reviewer, I just wanted to make sure that such matters had been properly considered - which I now know they have, so as far as I am concerned, I believe we can move on with the review.
I have a couple more issues that I will submit tomorrow and then I will proceed to reviewing the manuscript. Cheers!

Submitted issue martinfleis/momepy#109 to the original repository

Submitted issue martinfleis/momepy#110 to the original repository

Submitted issue martinfleis/momepy#111 to the original repository

This should be all from my side. I would just like to add a final note, which I do not intend to be part of the review. The documentation is provided as a Jupyter book hosted on a separate gh-pages branch of the repository. This is the first time that I have seen such an approach, and I have some concerns about its maintainability, since, unless I am missing something, two independent git branches (that will never be merged) are hosted within the same repo. Wouldn't a separate repository, such as osmnx-examples, be a better approach? In any case, I believe that a binder badge would be a nice addition.
On the other hand, the notebooks of the user guide could be added to the Travis CI workflow (even with the current single repository with a separate gh-pages branch), so that the example notebooks run properly on each commit. This could potentially avoid some of the issues that I encountered when running them and add extra robustness to the library. At the risk of "shameless self-promotion", the Travis setup of PyLandStats and this Makefile might be of help.
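As one possible shape for such a CI step, a minimal sketch of a script that the workflow could run to execute the user-guide notebooks; the docs/ path, timeout, and kernel name are assumptions:

```python
import glob
import os

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# execute every notebook under docs/ (assumed location of the user guide)
for path in sorted(glob.glob("docs/**/*.ipynb", recursive=True)):
    nb = nbformat.read(path, as_version=4)
    ep = ExecutePreprocessor(timeout=600, kernel_name="python3")
    # run each notebook in its own directory so relative data paths resolve
    ep.preprocess(nb, {"metadata": {"path": os.path.dirname(path) or "."}})
    print(f"executed {path}")
```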

In any case, I believe that this is a great package that paves the way for automated and reproducible analyses of urban morphology at an unprecedented scale.

Wouldn't a separate repository, such as osmnx-examples be a better approach? In any case, I believe that a binder badge would be a nice addition.

I have to admit that such an obvious idea did not come to my mind. Absolutely, I'll do that once I find some spare time. I have tried to make Binder work but gave up after a while; it has been on my to-do list ever since.

the notebooks of the user guide could be added as part of the Travis CI workflow

That is a good point. I'll check your repo for inspiration and include them (once migrated to a separate repository).

Thank you for your thorough review. It is extremely helpful. I'll try to deal with the issues you have opened ASAP.

@martibosch All issues you have opened should be fixed and merged into the master now. Thanks again!

@martibosch @geraintpalmer Is there anything else I can do towards finishing this review? From our discussions it seemed to me that you have finished, but looking at the checkboxes above it is apparently not the case yet.

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@martinfleis sorry that I forgot about the checkboxes. There is only one final checkbox that I am still leaving unchecked: not all the references have a DOI, which conflicts with the review checklist.

@martibosch the references without a DOI simply do not have one at all.

In such a case, I believe that I can put a check in the “References” checkbox; am I wrong, @drvinceknight?

I am happy with the paper now. Thanks @martinfleis for working through the issues.

In such a case, I believe that I can put a check in the “References” checkbox; am I wrong, @drvinceknight?

@martinfleis are there other alternative references with DOIs that you could include (not necessarily instead of the current ones, but perhaps alongside them)?

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@drvinceknight I am not aware of any. I looked for them when I wrote the paper; this is the best I was able to find. I have changed the SciPy reference based on https://www.scipy.org/citing.html, but that is probably all I can do.

That's fine and apologies for the delay in getting back to you (I've not been well).

@whedon generate pdf

Attempting PDF compilation. Reticulating splines etc...

@whedon check references

Attempting to check references...

```
Reference check summary:

OK DOIs

  • 10.1177/2399808319832612 is OK
  • 10.1016/j.compenvurbsys.2012.11.001 is OK
  • 10.1177/2399808317725075 is OK
  • 10.5281/zenodo.3463125 is OK
  • 10.1002/9781118747711 is OK
  • 10.3166/rig.22.287-305 is OK

MISSING DOIs

INVALID DOIs

  • None
```

@martinfleis it looks like https://doi.org/10.1007/978-3-642-03647-7_11 is a DOI for one of the papers you cite, could you add that please?

Once you've done that, could you make a tagged release and archive, and report the version number and archive DOI here, please?

@drvinceknight Thank you! This is a different paper, but I exchanged them, as they both cover the same topic and this one has a DOI.

I just made a release, v0.1.0 (https://github.com/martinfleis/momepy/releases/tag/v0.1.0)
Zenodo DOI for it is 10.5281/zenodo.3544609 (https://zenodo.org/record/3544609#.XdB3ui9OzOQ)

Hi @drvinceknight, do you think we'll be able to finish this review this week? I need to understand whether I can reference it elsewhere (and hence have an active doi) or not yet. Thanks!

@whedon check references

Attempting to check references...

```
Reference check summary:

OK DOIs

  • 10.1177/2399808319832612 is OK
  • 10.1016/j.compenvurbsys.2012.11.001 is OK
  • 10.1177/2399808317725075 is OK
  • 10.5281/zenodo.3483425 is OK
  • 10.1002/9781118747711 is OK
  • 10.1007/978-3-642-03647-7_11 is OK
  • 10.3166/rig.22.287-305 is OK

MISSING DOIs

  • None

INVALID DOIs

  • None
```

Hi @drvinceknight, do you think we'll be able to finish this review this week? I need to understand whether I can reference it elsewhere (and hence have an active doi) or not yet. Thanks!

Yup, we should definitely be done by the end of the week. Could you modify the archive metadata to match the paper? Specifically, the title needs to be: momepy: Urban Morphology Measuring Toolkit

@drvinceknight Thank you. Archive edited to match: https://zenodo.org/record/3544609#.XdUnkS9OzOQ

@whedon set 10.5281/zenodo.3544609 as archive

OK. 10.5281/zenodo.3544609 is the archive.

@whedon set v0.1.0 as version

OK. v0.1.0 is the version.

@openjournals/joss-eics this paper is ready to be accepted.

Thanks again to @geraintpalmer and @martibosch for your time and effort reviewing this work. :+1:

Great! I have a few steps to run through now.

@whedon accept

Attempting dry run of processing paper acceptance...

Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/1107

If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/1107, then you can now move forward with accepting the submission by compiling again with the flag deposit=true e.g.
@whedon accept deposit=true

```
Reference check summary:

OK DOIs

  • 10.1177/2399808319832612 is OK
  • 10.1016/j.compenvurbsys.2012.11.001 is OK
  • 10.1177/2399808317725075 is OK
  • 10.5281/zenodo.3483425 is OK
  • 10.1002/9781118747711 is OK
  • 10.1007/978-3-642-03647-7_11 is OK
  • 10.3166/rig.22.287-305 is OK

MISSING DOIs

  • None

INVALID DOIs

  • None
```

@whedon accept deposit=true

Doing it live! Attempting automated processing of paper acceptance...

🐦🐦🐦 👉 Tweet for this paper 👈 🐦🐦🐦

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited :point_right: https://github.com/openjournals/joss-papers/pull/1108
  2. Wait a couple of minutes to verify that the paper DOI resolves https://doi.org/10.21105/joss.01807
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

    Any issues? Notify your editorial technical team...

Congrats on your newly published paper @martinfleis!!! Thanks to editor @drvinceknight and to reviewers @martibosch and @geraintpalmer for your time and expertise. 🎉 🎉

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.01807/status.svg)](https://doi.org/10.21105/joss.01807)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01807">
  <img src="https://joss.theoj.org/papers/10.21105/joss.01807/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.01807/status.svg
   :target: https://doi.org/10.21105/joss.01807

This is how it will look in your documentation:

[DOI badge]

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us, please consider doing one (or both) of the following:
