Joss-reviews: [REVIEW]: Limbo: A Flexible High-performance Library for Gaussian Processes modeling and Data-Efficient Optimization

Created on 23 Jan 2018  ·  21 Comments  ·  Source: openjournals/joss-reviews

Submitting author: @aneoshun (Antoine Cully)
Repository: https://github.com/resibots/limbo
Version: V2.0
Editor: @arfon
Reviewer: @dfm
Archive: 10.5281/zenodo.1298561

Status

status

Status badge code:

HTML: <a href="http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0"><img src="http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0/status.svg)](http://joss.theoj.org/papers/ffe389ddf82a09b8397e6fb42c771ff0)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@dfm, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:

  1. Make sure you're logged in to your GitHub account
  2. Be sure to accept the invite at this URL: https://github.com/openjournals/joss-reviews/invitations

The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @arfon know.

Conflict of interest

Code of Conduct

General checks

  • [x] Repository: Is the source code for this software available at the repository url?
  • [x] License: Does the repository contain a plain-text LICENSE file with the contents of an OSI approved software license?
  • [x] Version: Does the release version given match the GitHub release (V2.0)?
  • [x] Authorship: Has the submitting author (@aneoshun) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?

Functionality

  • [x] Installation: Does installation proceed as outlined in the documentation?
  • [x] Functionality: Have the functional claims of the software been confirmed?
  • [x] Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • [x] Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • [x] Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • [x] Automated tests: Are there automated tests or manual steps described so that the function of the software can be verified?
  • [x] Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • [x] Authors: Does the paper.md file include a list of authors with their affiliations?
  • [x] A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • [x] References: Do all archival references that should have a DOI list one (e.g., papers, datasets, software)?
Labels: accepted, published, recommend-accept, review


All 21 comments

Hello human, I'm @whedon. I'm here to help you with some common editorial tasks. @dfm it looks like you're currently assigned as the reviewer for this paper :tada:.

:star: Important :star:

If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿

To fix this do the following two things:

  1. Set yourself as 'Not watching' https://github.com/openjournals/joss-reviews:

watching

  2. You may also like to change your default settings for watching repositories in your GitHub profile here: https://github.com/settings/notifications

notifications

For a list of things I can do to help you, just type:

@whedon commands

Attempting PDF compilation. Reticulating splines etc...
https://github.com/openjournals/joss-papers/blob/joss.00545/joss.00545/10.21105.joss.00545.pdf

@dfm - thanks for agreeing to review this submission. Any questions along the way please shout!

Hi all. I've had a crazy week so I probably won't get to this until next week. Please feel free to ping me if you don't hear anything from me by the end of next week. Sorry for the delay!

Hi @dfm,
Sure, of course.
Good luck with your crazy week, and thank you in advance for your time. We really appreciate it.
Best regards,

Hi @dfm,
I hope you are doing well and that you managed to go through your crazy week as well as you wanted.
As discussed, this is a friendly reminder regarding this review.

Best regards,


Friendly reminder to get to this sometime soonish @dfm

:wave: @dfm - any chance you can take a look at this soon?

Ugh. Sorryyyy! This all looks great, but I haven't found time to actually go through and check all the boxes and give feedback. I will make sure that I do before the end of the day on Wednesday. Sorry again!

Hi team, thank you so much for your patience here! I've gone through this and included some comments below. This is a really impressive piece of software and I am excited to use it in my own work. I think there are a few things that could improve the documentation and make everything consistent with the JOSS guidelines, but it shouldn't be too onerous.

Installation:

  • Overall, installation isn't too bad and the required dependencies can be easily installed following the directions here. Some of the optional deps were a bit annoying (classic C++ :smile:), but the docs here were complete.
  • The science tap on Homebrew has been discontinued so brew install homebrew/science/nlopt throws an error. Perhaps the suggestion of using that should be removed.

Performance:

  • There are strong claims in the paper when it comes to performance, but I didn't find that there was sufficient discussion on the benchmark pages of the docs to demonstrate this clearly. I think that it would be useful to include more discussion of what exactly is being tested in each experiment, why it matters, and a paragraph discussing the interpretation of the figures.
  • It's not entirely obvious to me that GPy is the right benchmark for the GP comparisons. I'm not really sure what would be better, but there should (at least) be some discussion of the fact that GPy is much more feature rich and easy to use. The focus of GPy is not performance and it is written in pure-Python (+numpy).

Documentation:

  • Overall the documentation is pretty complete, but I think that it would be much more useful if some of the earlier tutorial pages include more discussion of what is going on. Right now, the tutorials jump into source code pretty quickly and I think that a bit more theory (right on the tutorial pages!) would be useful.
  • Specifically related to the JOSS requirements, some of the text from the paper could probably be added to the documentation home page to make the statement of need clearer.
  • It would also be very useful to have examples of how to visualize the output of what is going on in each step of the optimization and tips on how to identify/debug models that aren't performing as expected.
  • In the Quick Start example, ./waf --exp test fails with Cannot read the folder 'PATH_TO_LIMBO/limbo/exp/test' for the correct value of PATH_TO_LIMBO.
  • I think that the Basic Example needs a longer introduction to explain what is going on. In my experience, this is where users will try to start, and I think that adding more details here would go a long way.
  • The full source code for the Basic Example includes the necessary includes and namespace, but the code snippets above should too.
  • Again, I think that the Basic Example needs more discussion at the end about what to expect as output, how to interpret it, and how to visualize what is going on.
  • The Advanced Example should probably have a listing of the full source code as well. Currently, the snippet for eval_func is missing the template and, even after fixing that, the code won't build on my machine (after I copied the listings directly). The error log is here: advanced.log

Thanks again for your patience! I hope that these comments are useful for improving the impact of this impressive library. Let me know if you have any questions.

@Aneoshun - please let me know when you've had a chance to incorporate @dfm's review feedback.

Dear @dfm and @arfon,

Thank you very much for your comments and your patience. We have addressed all of them and we believe that we made the documentation better.

You can see the changes that we made in this pull request: https://github.com/resibots/limbo/pull/257. Please also find below our responses to your comments:

Installation:

  • Overall, installation isn't too bad and the required dependencies can be easily installed following the directions here. Some of the optional deps were a bit annoying (classic C++ 😄), but the docs here were complete.

    • We are happy to read this.

  • The science tap on Homebrew has been discontinued so brew install homebrew/science/nlopt throws an error. Perhaps the suggestion of using that should be removed.

    • We have changed the documentation and now advise installing nlopt via brew install nlopt (http://www.resibots.eu/limbo/tutorials/compilation.html).

Performance:

  • There are strong claims in the paper when it comes to performance, but I didn't find that there was sufficient discussion on the benchmark pages of the docs to demonstrate this clearly. I think that it would be useful to include more discussion of what exactly is being tested in each experiment, why it matters, and a paragraph discussing the interpretation of the figures.

    • We have significantly extended and restructured the discussions on the two benchmark pages (http://www.resibots.eu/limbo/reg_benchmarks.html and http://www.resibots.eu/limbo/bo_benchmarks.html). In particular, we have added several paragraphs to introduce the benchmarks and give more details about the compared methods.

  • It's not entirely obvious to me that GPy is the right benchmark for the GP comparisons. I'm not really sure what would be better, but there should (at least) be some discussion of the fact that GPy is much more feature rich and easy to use. The focus of GPy is not performance and it is written in pure-Python (+numpy).

    • We have added another library to our regression benchmark: LibGP, a C++ library for Gaussian Processes (https://github.com/mblum/libgp). We also included in the discussion on this page (http://www.resibots.eu/limbo/reg_benchmarks.html) a paragraph explaining that GPy is a Python library with many more features that is designed to be easy to use. Moreover, GPy can achieve performance comparable to C++ libraries in the hyper-parameter optimization part because it relies on numpy and scipy, which essentially call C code with MKL bindings (almost identical to what we do in Limbo).
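As context for what such a regression benchmark times, here is a minimal numpy sketch (deliberately not Limbo, LibGP, or GPy code) of the core computation every exact-GP library performs: building the kernel matrix, factorising it with a Cholesky decomposition (the O(n³) step that dominates the benchmarks), and computing the posterior mean. All names below are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior_mean(X, y, X_star, noise=1e-6):
    """Exact GP posterior mean via Cholesky factorisation of K + noise*I."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)          # O(n^3): the step benchmarks measure
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return rbf_kernel(X_star, X) @ alpha

# Fit noiseless samples of sin(x) and predict back at the training inputs:
# with near-zero noise the posterior mean should interpolate the data.
X = np.linspace(0, 2 * np.pi, 20).reshape(-1, 1)
y = np.sin(X).ravel()
mu = gp_posterior_mean(X, y, X)
print("max abs error:", float(np.max(np.abs(mu - y))))
```

Whether this inner loop runs through MKL-backed numpy (GPy) or through Eigen (Limbo) is precisely what makes the two comparable on this portion of the benchmark.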

Documentation:

  • Overall the documentation is pretty complete, but I think that it would be much more useful if some of the earlier tutorial pages include more discussion of what is going on. Right now, the tutorials jump into source code pretty quickly and I think that a bit more theory (right on the tutorial pages!) would be useful.

    • We offer in the documentation a quick introduction to Bayesian Optimization in the guide “Introduction to Bayesian Optimization (BO)” (http://www.resibots.eu/limbo/guides/bo.html), in which we introduce the main concepts of BO. However, we agree that such concepts need to be known before starting the Limbo tutorials. For this reason, and following your suggestions, we added a sentence at the beginning of the basic example that invites interested readers to refer to this introduction (http://www.resibots.eu/limbo/tutorials/basic_example.html).

  • Specifically related to the JOSS requirements, some of the text from the paper could probably be added to the documentation home page to make the statement of need clearer.

    • We have added several paragraphs from the paper to the home page of the documentation, following your advice (http://www.resibots.eu/limbo/index.html).
  • It would also be very useful to have examples of how to visualize the output of what is going on in each step of the optimization and tips on how to identify/debug models that aren't performing as expected.

    • We have included in the basic example a small Python script to display the values observed by Limbo at each iteration (http://www.resibots.eu/limbo/tutorials/basic_example.html).
  • In the Quick Start example, ./waf --exp test fails with Cannot read the folder 'PATH_TO_LIMBO/limbo/exp/test' for the correct value of PATH_TO_LIMBO.

    • We have not been able to reproduce this problem. Are you sure that you ran the command ./waf --create test before attempting to compile (as indicated in the documentation)?
  • I think that the Basic Example needs a longer introduction to explain what it going on. In my experience, this is where users will try to start, and I think that adding more details here would go a long way.

    • We have added several lines to better introduce the Basic Example. In particular, we highlight the objective of this example and link to the definitions of the main concepts (http://www.resibots.eu/limbo/tutorials/basic_example.html).
  • The full source code for the Basic Example includes the necessary includes and namespace, but the code snippets above should too.

    • Thank you for pointing this out. We have modified the Basic Example page accordingly (http://www.resibots.eu/limbo/tutorials/basic_example.html).
  • Again, I think that the Basic Example needs more discussion at the end about what to expect as output, how to interpret it, and how to visualize what it going on.

    • We have extended the Basic Example to include a typical result obtained after running the example. In particular, we detail the files that are produced by Limbo and how to interpret them (http://www.resibots.eu/limbo/tutorials/basic_example.html).
  • The Advanced Example should probably have a listing of the full source code as well. Currently, the snippet for eval_func is missing the template and, even after fixing that, the code won't build on my machine (after I copied the listings directly). The error log is here: advanced.log

    • We have fixed the issues you mentioned and provided the full code of the Advanced Example (http://www.resibots.eu/limbo/tutorials/advanced_example.html).
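As a side note on the visualization point above: such a script need not be Limbo-specific. Here is a minimal Python sketch, assuming a log file with one "iteration observation" pair per line — this format is an assumption for illustration, not Limbo's documented output — that computes the best-observation-so-far curve one would typically plot for a Bayesian optimization run.

```python
import io

# Stand-in for opening the optimizer's log file; the two-column
# "iteration observation" layout is a hypothetical example format.
log = io.StringIO("""\
0 -0.80
1 -0.35
2 -0.60
3 -0.10
4 -0.25
""")

best_so_far = []
best = float("-inf")
for line in log:
    iteration, observation = line.split()
    best = max(best, float(observation))  # running maximum of observations
    best_so_far.append(best)

print(best_so_far)  # [-0.8, -0.35, -0.35, -0.1, -0.1]
```

Feeding `best_so_far` to matplotlib (or simply printing it) gives a quick convergence check and makes a stalled or misconfigured model immediately visible.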

In general, we believe that your comments really helped us to improve the documentation and the library. We hope you will like these changes.

Best regards,

:wave: @dfm - please take another look at this when you get a chance.

Hi @arfon and @Aneoshun,

Thanks (again) for your patience!

This looks really great. I think that the docs are much clearer now and I'm happy to check off the rest of the checkboxes and recommend this for publication. Thanks again and congrats!

Thanks @dfm

@Aneoshun - At this point could you make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission.

Hi @arfon

Here is the DOI from Zenodo: 10.5281/zenodo.1298561

@whedon generate pdf

@whedon set 10.5281/zenodo.1298561 as archive

OK. 10.5281/zenodo.1298561 is the archive.

@dfm - many thanks for your review here ✨

@jbmouret - your paper is now accepted into JOSS and your DOI is https://doi.org/10.21105/joss.00545 ⚡️:rocket: :boom:

:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:

If you would like to include a link to your paper from your README use the following code snippet:

[![DOI](http://joss.theoj.org/papers/10.21105/joss.00545/status.svg)](https://doi.org/10.21105/joss.00545)

This is how it will look in your documentation:

DOI

We need your help!

Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:
