Joss-reviews: [REVIEW]: NetworkChange: Analyzing Network Changes in R

Created on 1 Oct 2020  ·  31 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @ysohn (Yunkyu Sohn)
Repository: https://github.com/jongheepark/NetworkChange
Version: v0.6
Editor: @kakiac
Reviewers: @akbaritabar, @marcjwilliams1, @martinmodrak
Archive: Pending

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

Status

status

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/cb2034afbb7ea28d8c94bc4b421034b6"><img src="https://joss.theoj.org/papers/cb2034afbb7ea28d8c94bc4b421034b6/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/cb2034afbb7ea28d8c94bc4b421034b6/status.svg)](https://joss.theoj.org/papers/cb2034afbb7ea28d8c94bc4b421034b6)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@akbaritabar & @marcjwilliams1 & @martinmodrak, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. If you have any questions or concerns, please let @kakiac know.

Please start on your review when you are able, and be sure to complete your review within the next six weeks at the very latest.

Review checklist for @akbaritabar

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@ysohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @marcjwilliams1

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@ysohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [x] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

Review checklist for @martinmodrak

Conflict of interest

  • [x] I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Contribution and authorship: Has the submitting author (@ysohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [ ] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • [ ] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [ ] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • [x] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • [ ] References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?
Labels: R, TeX, review

All 31 comments

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @akbaritabar, @marcjwilliams1, @martinmodrak it looks like you're currently assigned to review this paper :tada:.

:warning: JOSS reduced service mode :warning:

Due to the challenges of the COVID-19 pandemic, JOSS is currently operating in a "reduced service mode". You can read more about what that means in our blog post.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository which means for GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@whedon generate pdf

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.4135/9781604265781 is OK

MISSING DOIs

- 10.18637/jss.v024.i05 may be a valid DOI for title: Fitting position latent cluster models for social networks with latentnet
- 10.1214/19-ba1147 may be a valid DOI for title: Detecting Structural Changes in Longitudinal Network Data
- 10.1214/15-aoas839 may be a valid DOI for title: Multilinear tensor regression for longitudinal relational data
- 10.21236/ada458734 may be a valid DOI for title: Latent space approaches to social network analysis

INVALID DOIs

- None

Editorial Comments carried over from [Pre-review] (@kakiac to check): https://github.com/openjournals/joss-reviews/issues/2471#issuecomment-656894068

  • [x] Your software repository does not appear to include an open source license
  • [x] Your paper is missing an explicit Statement of Need section, which is now required for articles as described in our documentation
  • [x] Please remove these lines from your paper's YAML header: aas-doi: 10.3847/xxxxx and aas-journal: Journal of Open Source Software.
  • [ ] Also, overall your paper is quite short. Although JOSS papers are generally only one or two pages on average (250-1000 words), your paper just barely passes that mark. I would recommend adding a bit more content, perhaps an example or more explanation/details of how the package works. Consider adding elements from this page: https://cran.r-project.org/web/packages/NetworkChange/vignettes/NetworkChange.html
  • [ ] It looks like a few of your references may be missing DOIs - see https://github.com/jongheepark/NetworkChange/pull/1

👋🏼 @ysohn @akbaritabar, @marcjwilliams1, @martinmodrak this is the review thread for the paper. All of our communications will happen here from now on.

All reviewers should have checklists at the top of this thread with the JOSS requirements. As you go over the submission, please check any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#2708 so that a link is created to this thread (and I can keep an eye on what is happening). Please also feel free to comment and ask questions on this thread. In my experience, it is better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 6 weeks. Please let me know if any of you require some more time. We can also use Whedon (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@kakiac) if you have any questions/concerns.

@kakiac We understand. Thank you very much! We also deeply appreciate the reviewers' time and effort.

@akbaritabar, @marcjwilliams1, @martinmodrak - how are you getting on with your reviews here? Please note we try to _complete_ the review process in 6 weeks which means that it's helpful for you all to provide initial feedback to the authors earlier than this.

I don't seem to be able to check off items on the checklist, if I click through to https://github.com/openjournals/joss-reviews/invitations as suggested I get the following message
Sorry, we couldn't find that repository invitation. It is possible that the invitation was revoked or that you are not logged into the invited account.

Still doesn't seem like I can edit the checklist... Should I be able to just check the boxes or is there more to it? (Sorry if this is a stupid question!)

@whedon re-invite @marcjwilliams1 as reviewer

OK, the reviewer has been re-invited.

@marcjwilliams1 please accept the invite by clicking this link: https://github.com/openjournals/joss-reviews/invitations

@marcjwilliams1 - can you try clicking the invite link again now? I believe the invites expire after a week.

@arfon working now, thanks.

Letter to authors

Thank you for your trust in allowing me to take part in this interesting experience. I study peer review as a research subject, and the way JOSS handles it was really interesting to me.

I must start by saying that I am a social scientist who uses R, Python, and network analysis tools and techniques on a daily basis. That means I have evaluated the submission with that background, and most of my points below are not technical at all. Reading the package paper, the vignette, and the authors' 2020 paper in the journal Bayesian Analysis, I was ready to give up on the review and state a lack of _technical_ confidence. Then I decided to act as an _advanced beginner_ user of the package and provide feedback on my experience that _might_ be useful for the authors and the editor (I certainly hope so). I will state the main points I faced, followed by some minor suggestions. Further below I have copied some of the points from the review checklist and added comments where I needed to elaborate.

A bit more description of the empirical network data used would be nice. For example, the authors' 2020 paper in Bayesian Analysis made it much clearer for me (quoting from that paper's text): "The structure of military alliance networks reflects the distribution of military power among coalitions of states, which changes over time in response to exogenous shocks to the international system or endogenous network dynamics. However, there has been no study that investigates changes in coalition structures of military alliance networks over time using a principled statistical approach. A main reason is the lack of statistical methods that model unknown structural changes in longitudinal network data."

The largest network size and number of longitudinal waves that the package can analyze and visualize are not mentioned in the paper or vignette. For the current vignette example, the "BreakDiagnostic" step took a few minutes on my laptop (Core i7, 16 GB RAM). I wonder whether the authors could give estimates of the expected runtime, and which network attributes and analysis settings affect it, so users have an idea of what to expect.

An example of how objects from popular R libraries for network analysis (e.g., the igraph and statnet suites) or external formats (e.g., the Pajek plain-text format) can be converted and used with NetworkChange would be really helpful. At first it feels like an array of temporal snapshots is easy to build, but it might not be for all potential users. I took the time to test this on a fake co-authorship network, and it was not as straightforward as I had hoped (please see the first paragraph above: I am an advanced beginner, so it might only be my problem). I used a bipartite edgelist saved as CSV, with source/target columns for author and publication nodes and a publication-year column (see the example below). I constructed a bipartite graph with igraph, induced yearly subgraphs (to have a temporal dimension to explore with NetworkChange), and projected them to one mode (authors). I then used array() from base R to get close to the shape of MajorAlly, but I failed: I couldn't work out how to provide the dimensions so that NetworkChange could be used on the sample. Providing a bit of description on this would open the package to a wider audience.

This is the example of bipartite ties that I tried to convert; I failed at the last step:

# example data (edges table) in csv format to copy/paste

source,target,PUBYEAR
author_1,paper_2,2007
author_2,paper_2,2007
author_3,paper_2,2007
author_4,paper_2,2007
author_5,paper_2,2007
author_1,paper_3,2011
author_2,paper_3,2011
author_3,paper_3,2011
author_4,paper_3,2011
author_5,paper_3,2011
author_1,paper_4,2012
author_2,paper_4,2012
author_3,paper_4,2012
author_4,paper_4,2012
author_5,paper_4,2012
author_1,paper_1,2013
author_2,paper_1,2013
author_3,paper_1,2013
author_4,paper_1,2013
author_5,paper_1,2013


# example data (vertices table) in csv format to copy/paste

vertices,type
author_1,TRUE
author_2,TRUE
author_3,TRUE
author_4,TRUE
author_5,TRUE
paper_1,FALSE
paper_2,FALSE
paper_3,FALSE
paper_4,FALSE


# Preparing to use with NetworkChange
require(tidyverse)
require(igraph)

edges_table <- read_csv('./edges.csv') %>% 
  arrange(PUBYEAR)

vertices_table <- read_csv('./vertices.csv')

# build a bipartite paper-author graph
g <- graph_from_data_frame(d = edges_table, vertices = vertices_table, directed = F)

yearly_g <- list()
for (year in unique(edges_table$PUBYEAR)) {
  # delete.vertices = FALSE because we need same dim for all matrices
  g_year <- subgraph.edges(g, E(g)[PUBYEAR == year], delete.vertices = FALSE)
  g_aut <- bipartite.projection(g_year, which = 'true')
  g_adj <- as.matrix(get.adjacency(g_aut, edges = F, names = T))
  yearly_g[[paste0('y_', as.character(year))]] <- g_adj
}

# Here I am not sure how to give the right dimensions and fail in building the right array that NetworkChange will use
arr_yearly_g <- array(yearly_g)

### the error I get is like this:

# use networkchange
G <- 100
set.seed(1990)
test.run <- NetworkStatic(arr_yearly_g, R=2, mcmc=G, burnin=G, verbose=0,
                          v0=10, v1=4*2)

# Error in array(Y, dim = c(dim(Y)[1], dim(Y)[2], 1)) : 
#   negative length vectors are not allowed
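For what it's worth, a possible fix (a sketch on my part, under the assumption that NetworkStatic() expects a node × node × time numeric array shaped like the package's MajorAlly example data) is to unlist the yearly matrices into a 3-D array, rather than calling array() on the list itself:

```r
# Sketch, untested against NetworkChange itself: stack the yearly adjacency
# matrices into an n x n x T numeric array. array(yearly_g) produced a 1-D
# array of list elements, which is why the dimensions came out wrong.
n <- nrow(yearly_g[[1]])     # all matrices share the same dimensions
T_waves <- length(yearly_g)  # number of yearly snapshots

arr_yearly_g <- array(unlist(yearly_g),
                      dim = c(n, n, T_waves),
                      dimnames = list(rownames(yearly_g[[1]]),
                                      colnames(yearly_g[[1]]),
                                      names(yearly_g)))
```

unlist() concatenates the matrices column-major in list order, so the t-th slice arr_yearly_g[, , t] reproduces the t-th yearly matrix; whether this shape is exactly what NetworkStatic() wants is for the authors to confirm.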

Below I present some minor points as suggestions for revision in the text. I am sorry that I don't have many technical suggestions to offer, and I hope the other reviewers and the editor will provide enough comments on those aspects.

Minor points

In JOSS paper

  • "The complete guide for using core functions of the package is presented at https://github.com/jongheepark/NetworkChange as its vignette with an empirical data set analysis example." -> This is the link to the repository, not the vignette (https://cran.r-project.org/web/packages/NetworkChange/vignettes/NetworkChange.html)
  • In both the paper and the vignette (see below), figure 1 ("Summary of selected features and functions of the package.") is used, but there is no description of it in the text. It is a nice figure summarizing the main features and function names; it would be nicer to have a few sentences describing it (IMO).
  • "library(NetworkChange)" is missing from the first code chunk of the paper (it is present in the vignette).
  • "We follow the COW dataset’s coding of “major powers” (the United Kingdom, Germany, Austria-Hungary, France, Italy, Russia, the United States, Japan, and China) in the analysis. We aggregated every 2 year network from the original annual binary networks to increase the density of each layer." -> in the script accompanying this part you are dropping CHN and USA since they don't have ties to other nodes, but in text it is not mentioned that they are dropped.

    • "Users can choose the number of clusters in each regime by ‘n.cluster}" -> the closing "}" should be a closing quotation mark.

In Vignette

  • "Users can change plot settings by changing options in ggnet." -> maybe, by changing options which _are passed to_ ggnet?
  • "In this section, We estimate the chagepoints of" -> changepoints
  • "We also highlight the latent cluster structure using the k-means clustering method over the estimated latent node positions of each regime. drawPostAnalysis() provides an option to include k-means clustering results of the latent node positions for each regime." -> I felt the two sentences were similar.
  • "In this section, we analyze changes in the international military alliance network among major powers. The data set is originally from (???)" -> reference in parenthesis is missing from the vignette.
  • "Our goal in this section is to detect structural changes in the longitudinal alliance network among major powers using HNC. We follow the COW dataset’s coding of “major powers” (the United Kingdom, Germany, Austria-Hungary, France, Italy, Russia, the United States, Japan, and China) in the analysis. We aggregated every 2 year network from the original annual binary networks to increase the density of each layer." -> in the script accompanying this part you are dropping CHN and USA since they don't have ties to other nodes, but in text it is not mentioned that they are dropped.
  • "Users can choose the number of clusters in each regime by ‘n.cluster}" -> the closing "}" should be a closing quotation mark.
  • Figure captions present in the paper are missing from the vignette, but I am not sure if the vignette would support them or not? If it does, it would be nice to have them here as well.
  • "International military alliances, 1648-2008. (2009). Washington, DC: CQ Press. doi:10.4135/ 9781604265781" -> Reference to empirical data used is missing from vignette

In package documentation PDF from CRAN

  • "orthgonalization" -> "orthogonalization"

Points from Review checklist that I needed to elaborate

  • General checks

    • [x] Contribution and authorship: Has the submitting author (@ysohn) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?



      • The submitting author seems to have joined recently (https://github.com/jongheepark/NetworkChange/graphs/contributors), but the first author has worked on this repository since late 2017. The current order of authorship seems to respect that, IMO.



    • [x] Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines



      • There are two peer-reviewed publications (2017 and 2020) based on the methodology as far as I can see, and the lead author is the maintainer of the CRAN task view on Bayesian analysis, so I feel confident the contribution is substantial. But I might not be technically qualified to judge that.



  • Functionality

    • [x] Installation: Does installation proceed as outlined in the documentation?



      • Installation from CRAN and latest version from Github went fine on Mac OS.


      • But, once installed from GitHub, the package vignette is not found in R: vignette(package = "NetworkChange") reports "no vignettes found".
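        A possible workaround, if helpful (assuming the vignette sources are present in the repository): installs from GitHub skip vignette building by default, so it has to be requested explicitly:

```r
# Assumption: the vignette sources live in the repo's vignettes/ directory.
# devtools::install_github() does not build vignettes unless asked; building
# them also requires the packages the vignette uses (and pandoc) locally.
devtools::install_github("jongheepark/NetworkChange", build_vignettes = TRUE)
vignette(package = "NetworkChange")  # the vignette should now be listed
```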



  • Documentation

    • [ ] Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?



      • The vignette and paper give an empirical example and scripts that show how the functions work, but I could not find automated tests.



    • [ ] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support



      • There is no guide on this in the package repository or its wiki, but the GitHub issue tracker and pull requests are presumably the intended channels.



  • Software paper

    • [x] Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?



      • The current description can be further improved. For example, even as someone familiar with network analysis, block-modeling, community detection, and similar techniques, I cannot say the whole methodology and its description are clear to me, though that could reflect my own lack of background knowledge.



    • [ ] Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?



      • I have given minor suggestions for the paper and the vignette above.



Overall everything looks fine; I get the same outputs when I run the vignette, but it would be nice to see some automated tests (jongheepark/NetworkChange#3). A note on community guidelines should also be added somewhere (jongheepark/NetworkChange#4).

👋🏼 @akbaritabar, @marcjwilliams1 many thanks for your reviews and comprehensive comments.

@ysohn can you please start addressing the reviewers' comments? I will be creating issues in your repo from @akbaritabar's comments to help us keep track of the comments to be addressed 😄

@martinmodrak I see that you have not checked any of your checklist items - can you edit it or are you having similar problems like @marcjwilliams1 ?

@whedon re-invite @martinmodrak as reviewer

@martinmodrak already has access.

@akbaritabar, @marcjwilliams1 Thank you so much for your reviews!

@kakiac We will address the issues shortly and commit on github starting next week. Thank you!

Sorry for not being active, some life stuff was happening. I'll have my review ready by Monday.

So I did a first round of review. I think I can complement @akbaritabar in that I've never done network analysis, but I have a background in Bayesian inference and even some experience with fitting HMMs. I have to be clear, however, that my exposure to Bayesian methods is primarily via the Stan language and the associated community (e.g., Gelman, Vehtari, Betancourt, Simpson), which tends to have an opinionated worldview on how Bayesian statistics should be done that may not necessarily match the consensus of the whole field. So feel free to challenge specific propositions/suggestions I make.

The problem attacked by the package is hard. My first impression upon reading the associated Bayesian Analysis paper was: "I wonder if this kind of model can be fit reliably with any available method." To some extent my suspicion has been vindicated: I've noticed some problems that threaten the validity of the computation (https://github.com/jongheepark/NetworkChange/issues/6), so this is currently my biggest concern. If I am right about these issues (and I may not be), resolving them completely might turn out to be a significant research project in its own right, and I don't want to force the authors to do that. I would generally be happy if the package provided diagnostics and warnings for the user in case something looks fishy.

The paper itself (and the package) relies heavily on the Bayesian Analysis paper for any deeper explanations. I am mostly OK with this, although I can see how adding at least a rough description of the actual model to the JOSS paper would be beneficial. I will be happy for @kakiac, as the editor, to weigh in on whether this is appropriate.

I should also add that installation was easy and the examples were easy to follow, so definitely good job there.

Also, I quite like the plots :-)

@whedon check references

@whedon generate pdf

:point_right::page_facing_up: Download article proof :page_facing_up: View article proof on GitHub :page_facing_up: :point_left:

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

OK DOIs

- 10.4135/9781604265781 is OK

MISSING DOIs

- 10.18637/jss.v024.i05 may be a valid DOI for title: Fitting position latent cluster models for social networks with latentnet
- 10.1214/19-ba1147 may be a valid DOI for title: Detecting Structural Changes in Longitudinal Network Data
- 10.1214/15-aoas839 may be a valid DOI for title: Multilinear tensor regression for longitudinal relational data
- 10.21236/ada458734 may be a valid DOI for title: Latent space approaches to social network analysis

INVALID DOIs

- None

Sorry for not being active, some life stuff was happening. I'll have my review ready by Monday.

No problem @martinmodrak, it's challenging times, we appreciate your help with this :)

@martinmodrak No problem. Wish you the best for the issues.. We deeply appreciate your comments.

Sorry for not being active, some life stuff was happening. I'll have my review ready by Monday.

@kakiac Thank you very much for compiling the issues!

👋 @ysohn, hope you are well, I have tried to organise the reviewers' comments up to now in separate issues in your repository to help us all track progress :). If you hover over them, you can see their status.

@akbaritabar, @marcjwilliams1, @martinmodrak, @kakiac

Dear Reviewers and Editor,

We deeply appreciate your detailed reviews and suggestions. It looks like some of the issues may take time for us to resolve. We will try to complete addressing the issues by January.

Wishing all of you good health and warm holidays!

All my best
