I love black, and have championed its adoption where I work (ravelin.com).
However, it is currently not viable as a save hook in the way gofmt and rustfmt are: taking the startup time of the Python interpreter into account, it is simply too slow. After every save I have to remember to wait a second or two for black to do its thing and let my editor autoreload.
Would you be open to adding a server/daemon mode, where a persistent instance of Black is running and listening to formatting requests?
(Of course, ideally we would compile Black into a very quickly starting binary, but that is probably quite a difficult task.)
This is a planned feature.
Don't you think a server is too complicated for this? Would it be possible to run it asynchronously, like https://github.com/neomake/neomake does?
@laixintao even with asynchronous calls (in Emacs) I need to make sure I do not type anything within ~2 seconds of hitting save in order to not interfere with the autoreloading process.
I get it. But are you sure the main time cost is the startup of the Python interpreter, rather than black scanning and analyzing the file? If it's the latter, a server/client model can't resolve this.
That's a good point. A little profiling shows that parsing is quite slow, but there is relatively little that can be done about it short of optimizing blib2to3.
@maciejkula you might want to look into the Python Language Server and the pyls-black plugin (disclaimer: I am the author 😉).
I'm not an Emacs user myself but it looks like you'll just need to install emacs-lsp.
Perhaps the Language Server Protocol would be a good API for a future black server mode.
I looked into this a bit and found the language server protocol to be overly complicated for the simple things we want to achieve with a server mode for black. Right now I'm more leaning towards keeping it as simple as possible:
- `black` with your usual options + `--server` to start up
- one `format_str` call per connection, where the response is `OK\n` followed by a UTF-8 encoded bytestream of the formatted code, or `ERROR\n` followed by some human-readable description of what went wrong

In the end I implemented this on top of HTTP as described here.
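To make the wire protocol above concrete, here is a minimal sketch of a server and client speaking it over a plain TCP socket. Everything here is illustrative: `fake_format` is a stand-in for a real call into black's `format_str`, and the host/port handling is just what a test harness needs, not what a `--server` flag would actually do.

```python
import socket
import threading

def fake_format(src: str) -> str:
    # Stand-in for black.format_str(); a real server would call black here.
    return src.strip() + "\n"

def handle(conn: socket.socket) -> None:
    """Serve one connection: read source until EOF, reply OK\n or ERROR\n."""
    with conn:
        data = b""
        while chunk := conn.recv(4096):
            data += chunk
        try:
            formatted = fake_format(data.decode("utf-8"))
            conn.sendall(b"OK\n" + formatted.encode("utf-8"))
        except Exception as exc:
            conn.sendall(b"ERROR\n" + str(exc).encode("utf-8"))

# Start a one-shot server on an ephemeral port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]
threading.Thread(target=lambda: handle(server.accept()[0]), daemon=True).start()

# Client: send the source, half-close the write side, read the reply.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"x=1")
client.shutdown(socket.SHUT_WR)  # signal EOF so the server stops reading
resp = b""
while chunk := client.recv(4096):
    resp += chunk
client.close()

status, _, body = resp.partition(b"\n")
print(status, body)  # b'OK' b'x=1\n'
```

The half-close (`shutdown(SHUT_WR)`) is what lets "one call per connection" work without any length framing: the server reads until EOF, the client then reads the reply until the server closes.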