Submitting author: @mikaem (Mikael Mortensen)
Repository: https://github.com/spectralDNS/shenfun
Version: 1.2.1
Editor: @katyhuff
Reviewer: @lucydot, @lindsayad
Archive: 10.5281/zenodo.1491713
Status badge code:
HTML: <a href="http://joss.theoj.org/papers/43f64b8a0ef42408c72acead37717ec6"><img src="http://joss.theoj.org/papers/43f64b8a0ef42408c72acead37717ec6/status.svg"></a>
Markdown: [![status](http://joss.theoj.org/papers/43f64b8a0ef42408c72acead37717ec6/status.svg)](http://joss.theoj.org/papers/43f64b8a0ef42408c72acead37717ec6)
Reviewers and authors:
Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)
@lucydot & @lindsayad, please carry out your review in this issue by updating the checklist below. If you cannot edit the checklist please:
The reviewer guidelines are available here: https://joss.theoj.org/about#reviewer_guidelines. Any questions/concerns please let @katyhuff know.
✨ Please try and complete your review in the next two weeks ✨
Review checklist (fragment): Does the paper.md file include a list of authors with their affiliations?

Hello human, I'm @whedon, a robot that can help you with some common editorial tasks. @lucydot, it looks like you're currently assigned as the reviewer for this paper :tada:.
:star: Important :star:
If you haven't already, you should seriously consider unsubscribing from GitHub notifications for this (https://github.com/openjournals/joss-reviews) repository. As a reviewer, you're probably currently watching this repository, which means that with GitHub's default behaviour you will receive notifications (emails) for all reviews 😿
To fix this do the following two things:
For a list of things I can do to help you, just type:
@whedon commands
Attempting PDF compilation. Reticulating splines etc...
In general, shenfun is packaged and documented very nicely. I need a few more days to look at functionality, but here are my comments so far:

- You could give instructions for running pytest locally (`python -m pytest`) and explicitly point to where the Travis builds are.
- Cython is listed under `setup_requires` rather than `install_requires` in the `setup.py` file, so pip ignores it: https://github.com/pytest-dev/pytest-xdist/issues/136. After pip installing Cython, `pip install shenfun` worked fine.

I've been very impressed by the documentation, and I've enjoyed my first exposure to a spectral element implementation.
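For context, a minimal sketch of the `setup_requires` vs `install_requires` distinction being discussed here; the package metadata below is illustrative only, not shenfun's actual `setup.py`:

```python
# setup.py -- illustrative sketch, not shenfun's actual configuration.
from setuptools import setup

setup(
    name="example-package",
    version="0.1.0",
    # setup_requires is only honoured by setuptools itself; `pip install`
    # largely ignores it, so a build-time dependency such as Cython may be
    # missing when the extension modules are compiled:
    # setup_requires=["cython"],
    #
    # Moving the dependency to install_requires makes pip fetch it before
    # (or together with) the package being installed:
    install_requires=["numpy", "cython"],
)
```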
I'm also exploring the Klein-Gordon problem and I'm a little curious about some of the timings. I'm running on my laptop, which has four physical cores capable of hyperthreading. Results:
```
(spectralDNS) lindad@localhost:~/scratch$ time mpiexec -np 1 python ./klein-gordon.py
real    0m15.007s
user    0m14.711s
sys     0m0.273s

(spectralDNS) lindad@localhost:~/scratch$ time mpiexec -np 2 python ./klein-gordon.py
real    0m11.127s
user    0m19.624s
sys     0m0.520s

(spectralDNS) lindad@localhost:~/scratch$ time mpiexec -np 4 python ./klein-gordon.py
real    0m9.876s
user    0m33.977s
sys     0m1.127s

(spectralDNS) lindad@localhost:~/scratch$ time mpiexec -np 8 python ./klein-gordon.py
real    0m11.531s
user    1m12.890s
sys     0m5.291s
```
So I do see a speed-up while increasing my real core count, but stagnation or a performance decline when using hyperthreading. The latter doesn't necessarily mean too much to me for reasons like this. However, I'm a little curious about the relatively small speed-up from 1 to 4 cores. Is there a lot of serial computation or communication? I'm also curious how @lucydot did her timings. I also understand that scaling studies are fraught with peril, especially for novice users of the software.
Sorry, much better to use the `time` module as it's already used in the solver script. Limiting output to rank 0:
```
(spectralDNS) lindad@localhost:~/scratch$ mpiexec -np 8 python ./klein-gordon.py
Time 8.530541181564331
(spectralDNS) lindad@localhost:~/scratch$ mpiexec -np 4 python ./klein-gordon.py
Time 7.4511120319366455
(spectralDNS) lindad@localhost:~/scratch$ mpiexec -np 2 python ./klein-gordon.py
Time 8.857075214385986
(spectralDNS) lindad@localhost:~/scratch$ mpiexec -np 1 python ./klein-gordon.py
Time 14.115585088729858
```
So a very strong performance gain moving from 1 to 2 procs, but seemingly diminishing returns moving from 2 to 4. With four procs there should still be roughly 64**3 / 4 = 65536 degrees of freedom per process, so it seems like there is still plenty of work for all. Any comment on this? It's not something I need to dwell on, but I'm curious.
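For reference, a minimal sketch of the rank-0 timing pattern used above, with mpi4py; the workload here is a placeholder, not the Klein-Gordon solver itself:

```python
# time_rank0.py -- sketch of timing an MPI run and printing only from rank 0.
import time
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

t0 = time.time()
# Placeholder workload: each rank repeatedly transforms its own chunk of data.
a = np.random.rand(64, 64, 64 // comm.Get_size())
for _ in range(100):
    a = np.fft.fftn(a).real

comm.Barrier()                 # make sure every rank has finished
if comm.Get_rank() == 0:
    print("Time", time.time() - t0)
```

Run with, e.g., `mpiexec -np 4 python time_rank0.py`.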
@lucydot Thanks a lot for very good and constructive feedback! I'll start by answering some of your questions/comments below:
- I really like the paper published on RawGit (Unfortunately I noticed that RawGit is shutting down next year, so you'll have to move it across to GitHub Pages or similar at some point).
Thanks for the heads up, I did not know that. I'll look into other options.
- It might be useful to state in the readme that shenfun is a Python 3 package (compatible with Python 2?)
Yes, I think you're right. A short section with dependencies could be in the readme. I'll add it.
- I think you could move a very short summary of Shenfun's unique selling points (global shape functions which lead to more accurate approximations than FEniCS / ability to run DNS on supercomputers using an accessible high-level language) to the readme and/or landing page of read the docs.
Very good idea:-)
- You could give instructions for running pytest locally (`python -m pytest`) and explicitly point to where the Travis builds are.
- You could include a "how to cite" section in your documentation (Zenodo DOI until the JOSS paper is published), including a bibtex entry.
- There are not clear guidelines for people who might want to contribute to the software.
All good and valid points. I'll add these at appropriate locations to the documentation.
@lindsayad Thank you for the fast and nice feedback:-)
Some comments on performance. Spectral methods use global basis functions, so the communication load is very high (MPI Alltoall), much larger than for, say, a finite element method that only needs to communicate data on the interfaces between distributed meshes. For this reason you usually need a computer with a fast interconnect between CPUs (preferably a high-performance supercomputer :-)) to see good speedup. Most laptops do not have fast enough interconnects to achieve good scaling, but a Cray XC does :-).
For a laptop with 4 cores (like my own 3-year-old MacBook Pro) you should be able to get speedup all the way up to 4 processes, though, even if not perfect. However, you should use the keyword `slab=True` when creating the `TensorProductSpace`:

```python
T = TensorProductSpace(comm, (K0, K1, K2), slab=True)
```
This has to do with how the data are decomposed and distributed between processors. A slab decomposition uses one processor group, whereas with `slab=False` you will use as many groups as possible, here 2. The advantage of using 2 groups is that you can use many more processors than with only one group. But if you are only going to use 4 processors, then one group is actually faster, which may explain your results with 4 processors. Another reason can be the hardware itself: many 4-core machines come with two sockets, two cores in each socket, and communication within a socket is faster than between sockets. So you may get a nice speedup going from one to two processes, but not as good moving to four. On a supercomputer the communication speeds are much higher than in your laptop and speedup should be more or less perfect up to thousands of processors. In the end this all depends on the hardware.
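To make the communication pattern concrete, here is a minimal sketch (my own illustration, not shenfun code) of the kind of Alltoall exchange that a distributed transpose of spectral data implies; every rank sends one block to every other rank, which is what dominates the runtime when the interconnect is slow:

```python
# alltoall_sketch.py -- illustrative only; a row-slab to column-slab transpose.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
size = comm.Get_size()
rank = comm.Get_rank()

N = 8 * size                                   # global dimension, divisible by size
b = N // size
local = np.full((b, N), rank, dtype=np.float64)  # this rank's slab of rows

# Split my rows into one block per destination rank and exchange globally.
send = np.ascontiguousarray(local.reshape(b, size, b).transpose(1, 0, 2))
recv = np.empty_like(send)
comm.Alltoall(send, recv)

# After the exchange each rank holds a slab of columns instead of rows.
cols = recv.reshape(N, b)
if rank == 0:
    print("value received from each rank:", recv[:, 0, 0])
```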
Hyperthreading is a different story. You need to activate it explicitly using, e.g.,

```python
T = TensorProductSpace(comm, (K0, K1, K2), slab=True, **{'threads': 2})
```

This will run the FFTW library with multiple threads (OpenMP). Again, speedup is very hardware dependent; I have seen speedup on some computers, but none on others. I'm not an expert, though.
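Putting the two keywords together, a minimal sketch; the Fourier basis constructors below are my assumption based on the shenfun demos for roughly this version and may differ in other releases, while the `TensorProductSpace` call itself is the one quoted in this thread:

```python
# Sketch only: C2CBasis/R2CBasis are assumed from the shenfun demos of this
# vintage; only the TensorProductSpace call is taken from the discussion above.
from mpi4py import MPI
from shenfun import TensorProductSpace
from shenfun.fourier.bases import C2CBasis, R2CBasis

comm = MPI.COMM_WORLD
N = (64, 64, 64)
K0 = C2CBasis(N[0])
K1 = C2CBasis(N[1])
K2 = R2CBasis(N[2])

# Slab decomposition (one processor group) with two FFTW threads per rank:
T = TensorProductSpace(comm, (K0, K1, K2), slab=True, **{'threads': 2})
```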
@lucydot Regarding installation, what was the problem with Cython? My understanding is that if Cython is in `setup_requires`, then pip will install it if it is missing? And Cython is only required for building the code, not for running shenfun, which is why I did not include it under `install_requires`.
@lucydot Regarding functionality please see my response to @lindsayad. For scaling tests, please use `slab=True`.
Thanks for the comprehensive reply :-) I'm satisfied with it.
Regarding the installation, I ran into the same issue as @lucydot. Cython was not installed automatically when pip installing shenfun. After manually installing Cython first (as noted in your documentation), shenfun installed fine.
Ok. Should be fixed by adding Cython to `install_requires`, right?
Hi @mikaem, thank you for your post re: scaling up. @lindsayad I was running the calculations on HPC, which explains why I got better scale-up. Yes, moving to `install_requires` should fix it.
Hi @mikaem, I've taken a look at your updated documentation and am happy to sign off on my review.
cc @katyhuff
Great, thanks a lot @lucydot :-)
Amazing that the system "std" C++ headers for OS X Sierra are not standard compliant (the real and imaginary component setters are done through references instead of through void methods that take an argument, as the standard dictates). But after using a local standard library install, I get all tests to pass with the exception of the `test_curl` test for long double complex, where I get a `KeyError` for `'G'`. I don't understand this failure, but my intuition suggests that the error is probably not caused by Shenfun. Anyway, with 4727 passes, I'm pretty much ready to sign off on this myself. Tagging @katyhuff.
Very nice everyone! @lindsayad @lucydot thanks a lot for your quick and thorough reviews. I'll take a quick look over in the morning and we'll finalize the acceptance.
@mikaem thank you for a strong submission and for engaging actively in the review process!
Could you please make an archive of the reviewed software in Zenodo/figshare/other service and update this thread with the DOI of the archive? I can then move forward with accepting the submission! (You may want to be sure to double check the paper, all minor details, etc. before creating the archive).
Thank you again, @lucydot and @lindsayad for the reviews! I know it's a lot of work, and I appreciate that you both conducted very prompt and thorough reviews.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
Thanks a lot @katyhuff, @lucydot and @lindsayad :-)
Is it too late to make any changes to the pdf? I noticed that changes to the documentation based on the very good suggestions of the esteemed reviewers had not been migrated to the pdf:-)
Thanks, I enjoyed reviewing!
@mikaem it is not at all too late to make changes to the pdf. Quite the opposite -- the ideal time is now. So, go ahead and double check that paper, review any lingering details in your code/readme/etc., and then make an archive of the reviewed software in Zenodo/figshare/other service. When you update this thread with the DOI of the archive, I'll move forward with the submission. Until then, this is your moment for final touchups!
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
@katyhuff is it also ok to bump the version number before a final submit? I used version 1.2.0 in the original submission, but there have been modifications.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
Indeed @mikaem, if a new release is desired to reflect review-related changes to the paper and code, many authors do issue a new (minor or patch) release version at this stage, before minting the DOI.
Thanks @katyhuff, we should then be able to continue
DOI that resolves to latest version:
https://doi.org/10.5281/zenodo.1237749
DOI of latest version now:
https://doi.org/10.5281/zenodo.1491713
@mikaem Okay, we can only use one DOI, and it should point to a single, exact, reviewed version, so I'm going to use 10.5281/zenodo.1491713 .
@whedon set 10.5281/zenodo.1491713 as archive
OK. 10.5281/zenodo.1491713 is the archive.
@whedon generate pdf
Attempting PDF compilation. Reticulating splines etc...
Congratulations @mikaem, your paper is ready to be accepted!
@arfon We're ready to accept this submission, so it's over to you!
@whedon accept
Attempting dry run of processing paper acceptance...
Check final proof :point_right: https://github.com/openjournals/joss-papers/pull/69
If the paper PDF and Crossref deposit XML look good in https://github.com/openjournals/joss-papers/pull/69, then you can now move forward with accepting the submission by compiling again with the flag `deposit=true`
e.g.
@whedon accept deposit=true
@whedon accept deposit=true
Doing it live! Attempting automated processing of paper acceptance...
🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨
Here's what you must now do:
Party like you just published a paper! 🎉🌈🦄💃🍕🤘
Any issues? Notify your editorial technical team...
@lucydot, @lindsayad - many thanks for your reviews here and to @katyhuff for editing this submission โจ
@mikaem - your paper is now accepted into JOSS :zap::rocket::boom:
:tada::tada::tada: Congratulations on your paper acceptance! :tada::tada::tada:
If you would like to include a link to your paper from your README use the following code snippets:
Markdown:
[![DOI](http://joss.theoj.org/papers/10.21105/joss.01071/status.svg)](https://doi.org/10.21105/joss.01071)
HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.01071">
<img src="http://joss.theoj.org/papers/10.21105/joss.01071/status.svg" alt="DOI badge" >
</a>
reStructuredText:
.. image:: http://joss.theoj.org/papers/10.21105/joss.01071/status.svg
:target: https://doi.org/10.21105/joss.01071
This is how it will look in your documentation (rendered DOI badge):
We need your help!
Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following: