Currently, canvas_util.ts caches WebGL contexts by version, with a retrieval function that attempts to load a WebGL context by creating a new canvas DOM element.
Ideally, we would have a flag or global override to set the WebGL/GL context object.
For instance, setting WEBGL_FENCE_API_ENABLED to false will result in this exception when running outside of the browser:
(node:89462) UnhandledPromiseRejectionWarning: ReferenceError: document is not defined
at getWebGLRenderingContext (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/canvas_util.ts:63:18)
at getWebGLContext (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/canvas_util.ts:37:30)
at Object.getWebGLContext (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/canvas_util.ts:42:12)
at Object.getWebGLDisjointQueryTimerVersion (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/webgl_util.ts:516:14)
at Object.evaluationFn (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/flags_webgl.ts:104:21)
at Environment.evaluateFlag (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/environment.ts:98:40)
at Environment.get (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/environment.ts:61:33)
at Environment.getNumber (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/environment.ts:67:17)
at GPGPUContext.createFence (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/gpgpu_context.ts:254:13)
at GPGPUContext.createAndWaitForFence (/Users/kreeger/workspace/tfjs-backend-nodegl/node_modules/@tensorflow/tfjs-core/src/backends/webgl/gpgpu_context.ts:232:31)
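For reference, a guard plus override in the spirit of what's being proposed might look like the sketch below. This is illustrative only, not the actual canvas_util.ts code: the names and the simplified `object` context type are made up so the idea can be shown headless.

```typescript
// Illustrative sketch of a global context override, not real TFJS code.
const contexts: {[version: number]: object} = {};

// Proposed override: register a context (for example one created with a
// headless GL binding) keyed by WebGL version.
function setWebGLContext(version: number, ctx: object): void {
  contexts[version] = ctx;
}

function getWebGLContext(version: number): object {
  // A user-supplied context wins; no DOM access is needed.
  if (contexts[version] != null) {
    return contexts[version];
  }
  // Look up `document` via globalThis so this also type-checks under Node.
  const doc = (globalThis as any).document;
  if (doc == null) {
    // Fail with an actionable message instead of "document is not defined".
    throw new Error(
        `Cannot create a canvas for WebGL${version} outside the browser; ` +
        `call setWebGLContext(${version}, ctx) first.`);
  }
  // Browser path: fall back to creating a canvas, as canvas_util does today.
  const canvas = doc.createElement('canvas');
  return canvas.getContext(version === 2 ? 'webgl2' : 'webgl');
}
```

With something like this in place, a Node caller could register a headless context up front and never hit the `document` code path.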
I tried importing setWebGLContext but the key as a number does not work on contexts. I'll whip up a patch tomorrow for this.
Relatedly, if you do your own WebGL work in the same context as TensorFlow.js, you have to restore TFJS's vertex attributes and ELEMENT_ARRAY_BUFFER binding, because TFJS only sets them once at initialization time. Once you do that, it works fine.
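A generic save/restore wrapper for that kind of interleaving could look like the sketch below. This is not TFJS code; the MinimalGL interface and all names are made up for illustration, and it only covers the current program and the index-buffer binding. The vertex attribute pointers additionally need rebinding, as discussed later in the thread.

```typescript
// Minimal slice of the WebGL interface, typed loosely so the sketch can be
// exercised with a stub instead of a real GL context.
interface MinimalGL {
  ELEMENT_ARRAY_BUFFER: number;
  ELEMENT_ARRAY_BUFFER_BINDING: number;
  CURRENT_PROGRAM: number;
  getParameter(pname: number): unknown;
  useProgram(p: unknown): void;
  bindBuffer(target: number, b: unknown): void;
}

function withSavedGLState(gl: MinimalGL, fn: () => void): void {
  // Capture the state TFJS assumes is still in place.
  const program = gl.getParameter(gl.CURRENT_PROGRAM);
  const indexBuffer = gl.getParameter(gl.ELEMENT_ARRAY_BUFFER_BINDING);
  try {
    fn();  // run your own WebGL work here
  } finally {
    // Put the captured bindings back before handing the context over again.
    gl.useProgram(program);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
  }
}
```

Real WebGL has far more state than this (vertex attribs, scissor, framebuffers), so this is just the skeleton of the idea.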
@pmer do you know an easy way to restore TFJS's vertex attribs? I am wondering how involved it might be.
Yes, it is incredibly simple. I was able to get it to work by checking whether gpgpu.vertexAttrsAreBound is true and, if so, calling gpgpu_util.bindVertexProgramAttributeStreams(gl, false, gpgpu.program, gpgpu.vertexBuffer). Right now vertexAttrsAreBound is marked private in TypeScript, so if you are using TypeScript yourself, you have to cast gpgpu to any before reading it. The gpgpu object is a property of the MathBackendWebGL class, so you can get it like this: tf.engine().registry.webgl.gpgpu.
This logic works on current TFJS, but since it's not part of the public API, it could break in any future release. Although that is exactly what this GitHub issue is about.
AMAZING!!!! I'll give it a try ASAP :-)
Works like a charm!!! Thanks a ton.
Glad to hear it!!
I don't think this issue would be too hard to fix for someone who knows how it could best fit into the public API that TFJS wants to expose. @sebavan, may I ask if you have any comments or suggestions in case anybody wants to take this up?
I cross posted here on the community: https://groups.google.com/a/tensorflow.org/g/tfjs/c/5VAnA7X0BRw/m/7iuomFqQBQAJ
Content is:
Basically, this part helps set up the custom shared context (the WebGL version and so on should be double-checked):
const mygl = this.canvas.getContext('webgl2');
setWebGLContext(2, mygl as any);
await tf.setBackend('webgl');
This (hackish) part restores the state after I am done with my own work in the canvas (it would be great to have this as a restore helper):
const gpgpu = (tf.backend() as any).gpgpu;
const gl = this.engine._gl;
if (gpgpu.vertexAttrsAreBound) {
  this.engine.enableScissor();
  gl.useProgram(gpgpu.program);
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, gpgpu.indexBuffer);
  gpgpu_util.bindVertexProgramAttributeStreams(
      gl, false, gpgpu.program, gpgpu.vertexBuffer);
}
And this helps me share internal output tensor textures with my app (a simpler helper would also be great here):
const output = resize(inputTensor);
const backend = tf.backend() as unknown as tfgl.MathBackendWebGL;
const dataId = (output as any).dataId;
const textureData = backend.texData.get(dataId);
const texture = textureData.texture;
The only missing part would be the ability to share input data by having a kind of fromTexture equivalent to fromPixels.
Any thoughts or concerns?
Nice! That's cool, I think we came to the same solution. :)
The only other thing I did when extracting tensors from TFJS was to call (backend as any).decode() on them, because the data on them was encoded in a TFJS-internal format. With .decode() they are left in a "square-ish" texture (the dimensions of which you can get with tex_util.getDenseTexShape()) that I'm able to copy into a texture of my own preferred width/height. Once you're done with the decoded texture, you call backend.disposeData() on it. I couldn't figure out how to use encoded texture data, which is why I took this route.
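For anyone curious, the "square-ish" dimensions can be estimated without touching tfjs internals. This is a sketch of my understanding of what tex_util.getDenseTexShape computes for packed textures (four float channels per texel, arranged as close to square as possible); treat it as an approximation, not the canonical implementation:

```typescript
// Total number of elements in a tensor shape, e.g. [480, 640, 3] -> 921600.
function sizeFromShape(shape: number[]): number {
  return shape.reduce((a, b) => a * b, 1);
}

// Dense ("square-ish") texture shape for a packed texture: four channels
// are stored per texel, and the texels are laid out near-square.
function getDenseTexShape(shape: number[]): [number, number] {
  const size = sizeFromShape(shape);
  const texels = Math.ceil(size / 4);  // 4 float channels packed per texel
  const width = Math.ceil(Math.sqrt(texels));
  const height = Math.ceil(texels / width);
  return [height, width];
}
```

For example, a [480, 640, 3] image tensor needs 230400 texels, which lands on a 480x480 texture.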
For a fromTexture() function, I did something like this:
const myShape = [myHeight, myWidth, myNumChannels];
const texShape: [number, number] = [myHeight, myWidth];
const myTexture: WebGLTexture = ...;
// Create a temporary TensorInfo that I will free in a little bit.
const tempTensor = backend.makeTensorInfo(texShape, 'float32');
backend.texData.get(tempTensor.dataId).usage = TextureUsage.PIXELS;
(backend as any).uploadToGPU(tempTensor.dataId);
// (Hack) Put my data in the TensorInfo.
const originalTexture: WebGLTexture =
backend.texData.get(tempTensor.dataId).texture;
backend.texData.get(tempTensor.dataId).texture = myTexture;
// Create a new TensorInfo containing my data in an encoded format.
const program = new FromPixelsPackedProgram(myShape);
const res: tf.TensorInfo =
backend.runWebGLProgram(program, [tempTensor], 'float32');
// Free the temporary TensorInfo.
backend.texData.get(tempTensor.dataId).texture = originalTexture;
backend.disposeData(tempTensor.dataId);
// Wrap the encoded data in a tf.Tensor.
const tensor: tf.Tensor = tf.engine().makeTensorFromDataId(
res.dataId,
res.shape,
res.dtype,
backend
);
The logic came mostly from tf.browser.fromPixels().
Once I got these insert texture & extract texture functions working, the speed of running TFJS functions every frame in my app became a non-issue. I was always getting 60 FPS! (And getting 144 FPS on 144 Hz displays.)
Thanks a lot for the nice trick with .decode(), as I was currently struggling with an ugly way of solving it :-)
I know what you mean; my first attempt was also ugly. This .decode() approach was my second attempt. :)
It'd be really great if TensorFlow.js could fix this, since it sounds like it's not that much work. Without something like this, its usefulness is really limited for a whole class of applications, which is unfortunate because TensorFlow.js is otherwise amazing.
cc @pyu10055
@pmer, about the texture sharing: how did you access FromPixelsPackedProgram and TextureUsage without creating a custom TF build? I cannot find them exported in the current version.
Just wondering if I am missing something obvious :-)
Mainly because I am currently doing it like this:
import { FromPixelsPackedProgram } from '@tensorflow/tfjs-backend-webgl/dist/kernels/FromPixels_utils/from_pixels_packed_gpu'
But I am a bit worried about upgrades and backward compatibility, though if #3937 goes in, it will solve all of it.
I have that exact same line of code, character for character! :) That line works for me without making a custom build. Does it for you?
And I'm worried about upgrades, too. Having texture insertion & deletion is a must-have feature for me (my app will, for its purposes, not work without this), so I am prepared to forego upgrading to any new version of TFJS that would break what we have here. Not a good solution, I know. :) Looking forward to 3937.
I cannot do a custom build, IIRC, because TFJS is tightly coupled to specific versions of TypeScript and will not compile with an older or newer version of TypeScript. (Does this mean that TypeScript makes breaking changes regularly?)
But my program uses a different version of TypeScript than TFJS does, so to do a custom build I'd have to keep two versions of TypeScript at once and run two different build processes, which is too complicated for me.
Ohhh yup, the line works for me, but digging into the internals makes me scared of updates, hence the custom-build idea, but I guess I'll stick with it :-)