Hello,
What would be involved in adding SPIR-V as a target for Zig? There's a translator from (some subset of?) LLVM to SPIR-V; as I understand it, Zig compiles to LLVM IR first, so this _seems_ reasonable. There's been some discussion of adding SPIR-V as a Clang target (I'm not sure it has materialized further), but I think this would be interesting.
This document goes into detail about the representation of SPIR-V in LLVM:
https://github.com/KhronosGroup/SPIRV-LLVM/blob/khronos/spirv-3.6.1/docs/SPIRVRepresentationInLLVM.rst
Adding a custom SPIR-V target might lay the groundwork for adding other backends to Zig.
The most straightforward way for this to work would be if LLVM supported it directly. However, doing this is still open for discussion even if that scenario does not happen.
One potentially fruitful direction would be to look at how Clang does this:
https://clang.llvm.org/docs/UsersManual.html#opencl-features
It has nvptx64 and amdgcn targets and can emit LLVM bitcode for them, and the LLVM-to-SPIR-V translator is also based on libLLVM, I think, so somebody who knows more than I do could probably figure it out. It seems complicated, though.
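To sketch what that pipeline might look like: the Zig compiler would tag its LLVM module with a SPIR triple and write bitcode, which the Khronos SPIRV-LLVM translator's `llvm-spirv` tool could then turn into a `.spv` binary. Below is a rough, non-authoritative sketch using the LLVM C++ API; the module contents and file name are illustrative only, and a real kernel would also need the SPIR calling convention and the metadata described in the document linked above.

```cpp
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/IRBuilder.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

int main() {
  LLVMContext ctx;
  Module mod("zig_spirv_sketch", ctx);
  // Mark the module as targeting 64-bit SPIR; `llvm-spirv kernel.bc -o kernel.spv`
  // (from the SPIRV-LLVM translator) would consume this bitcode.
  mod.setTargetTriple("spir64-unknown-unknown");

  // Emit a trivial void function so there is something to translate.
  FunctionType *fnTy = FunctionType::get(Type::getVoidTy(ctx), /*isVarArg=*/false);
  Function *fn = Function::Create(fnTy, Function::ExternalLinkage, "entry", &mod);
  IRBuilder<> b(BasicBlock::Create(ctx, "body", fn));
  b.CreateRetVoid();

  // Write plain LLVM bitcode to disk for the external translator to pick up.
  std::error_code ec;
  raw_fd_ostream out("kernel.bc", ec, sys::fs::OF_None);
  if (ec)
    return 1;
  WriteBitcodeToFile(mod, out);
  return 0;
}
```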
Sorry, this article appears to have some references to the current state of the art:
Some advice from @paniq about implementing such a backend:
<lritter> ...there's https://www.khronos.org/registry/spir-v/specs/1.0/SPIRV.html, but also have a look at SpvBuilder in glslang - i made a copy of that one and expanded it a little
<lritter> also, you will need SPIRV Tools for the validator and other stuff. SPIRV Cross can then convert your SPIR-V to GLSL, which is also great for seeing if your stuff produces the code you have in mind
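To make the second half of that advice concrete, here's a small, hedged C++ sketch of what checking a generated module might look like, assuming SPIRV-Tools and SPIRV-Cross are linked in as libraries. The `check_and_dump` name and the idea of feeding it output from a not-yet-existing Zig backend are assumptions; the library calls themselves are the documented C++ APIs of those projects.

```cpp
#include <spirv-tools/libspirv.hpp>  // SPIRV-Tools: validator
#include <spirv_glsl.hpp>            // SPIRV-Cross: GLSL backend
#include <cstdint>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

// `words` would be a SPIR-V module produced by the (hypothetical) Zig backend.
bool check_and_dump(std::vector<uint32_t> words) {
  // Run the validator before handing the module to a driver.
  spvtools::SpirvTools tools(SPV_ENV_VULKAN_1_1);
  tools.SetMessageConsumer([](spv_message_level_t, const char*,
                              const spv_position_t& pos, const char* msg) {
    std::cerr << "spirv-val at word " << pos.index << ": " << msg << "\n";
  });
  if (!tools.Validate(words))
    return false;

  // Round-trip through SPIRV-Cross to eyeball the module as GLSL, which is
  // handy for checking that the backend produced the code you had in mind.
  spirv_cross::CompilerGLSL glsl(std::move(words));
  std::cout << glsl.compile() << "\n";
  return true;
}
```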
Hi @andrewrk -- I think LLVM is well on its way to supporting this, but ISTM that it might be necessary or useful to use MLIR. I'm currently exploring this on my own and it definitely seems like a cool project! I'm far from grokking it just yet, but it should be possible to target other GPU backends besides SPIR-V/Vulkan this way, too. IREE is taking this approach to compile from TensorFlow through LLVM.
How would this work with defining images, buffers, input/output variables and all the binding slots and stuff? Or would this just be for the OpenCL use case? (Or maybe all of this can be solved 'easily'?)
I see they want to compile C++ to SPIR-V 🤦‍♂️ RUN AWAY! 😱
I think the focus here should be on generating MLIR / SPIR-V. Tooling and other stuff (like whether / how to use OpenCL types, how to launch and schedule kernels, native GPU types for components, and all that) isn't out of the question, but can hopefully be done as library layers on top of the core functionality. (Counterargument: if the heterogeneous system is all modeled in one IR, LLVM _can optimize across the CPU/GPU boundary_, as IREE does.)
Otherwise it runs the risk of being too much work to implement, or of taking too long to become useful.
If adding a SPIR-V backend would require adding too many {GPU,SPIR-V}-specific features to the core language / compiler, then my proposal is to instead add something like a "Zig @dialect" that can be imported by the build tooling as a library (implemented as MLIR dialects?).
> How would this work with defining images, buffers, input/output variables and all the binding slots and stuff?
With inline assembly, or with target-specific builtin functions.
https://github.com/EmbarkStudios/rust-gpu does this (its 0.1 release just came out).
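Whatever surface syntax Zig ends up with (builtins, inline assembly, or rust-gpu-style attributes), at the SPIR-V level images, buffers, and input/output variables are just module-scope variables carrying `DescriptorSet`, `Binding`, and `Location` decorations, so the frontend feature mostly has to decide how those decorations get attached. Here's a hedged C++ sketch that uses SPIRV-Cross reflection to show where that information lives in a finished module; the variable names and the `@binding(set, slot)` builtin mentioned in the comments are purely hypothetical.

```cpp
#include <spirv_cross.hpp>
#include <cstdint>
#include <cstdio>
#include <utility>
#include <vector>

// `words` is any valid SPIR-V module, e.g. one produced by a future Zig backend.
void dump_bindings(std::vector<uint32_t> words) {
  spirv_cross::Compiler comp(std::move(words));
  spirv_cross::ShaderResources res = comp.get_shader_resources();

  // Uniform buffers (and images, samplers, etc.) are global variables whose
  // binding slots are plain decorations -- exactly what a hypothetical
  // @binding(set, slot) builtin or a piece of inline assembly would have to emit.
  for (const auto &ubo : res.uniform_buffers) {
    std::printf("ubo %s set=%u binding=%u\n", ubo.name.c_str(),
                comp.get_decoration(ubo.id, spv::DecorationDescriptorSet),
                comp.get_decoration(ubo.id, spv::DecorationBinding));
  }
  // Stage inputs/outputs use Location decorations instead of binding slots.
  for (const auto &in : res.stage_inputs) {
    std::printf("in  %s location=%u\n", in.name.c_str(),
                comp.get_decoration(in.id, spv::DecorationLocation));
  }
}
```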