tfjs: How can I run a PoseNet model with TensorFlow without using the browser?

Created on 27 Nov 2018 · 11 comments · Source: tensorflow/tfjs



All 11 comments

hi @swatinair123

You should be able to use the PoseNet model without the browser, but you will have to modify the current code to replace the web camera input with your own image/video input, and to replace browser-specific entities such as DOM elements.

Yes, in my current code I have made changes to take input videos from the local desktop. But my main aim is to write the keypoints to a .json file; when working in the browser, it will not let me write the points.

Hi, you should be able to create or write a .json file if your process is running in a Node environment. You probably want to try @tensorflow/tfjs-node.

Note: if you're using the PoseNet implementation in the npm @tensorflow-models/posenet package, it isn't properly ported to Node yet, so you'll need to hack XHR into the environment: `npm install xhr2` and then add `global.XMLHttpRequest = require('xhr2')` at the start of your Node.js script. Also be aware that you need to be online every time you run your script, because it downloads the PoseNet models from Google's servers on each run; they aren't embedded in the npm package.
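A sketch of that shim (assuming `xhr2` is installed via npm; the try/catch only lets the sketch run where it isn't):

```javascript
// The shim must be set up *before* requiring @tensorflow-models/posenet,
// because the model loader fetches the weights over XHR on every run.
let shimmed = false;
try {
  global.XMLHttpRequest = require('xhr2');  // needs: npm install xhr2
  shimmed = true;
} catch (e) {
  console.log('xhr2 not installed; run: npm install xhr2');
}

// With the shim in place (and while online), loading then works:
// const posenet = require('@tensorflow-models/posenet');
// const net = await posenet.load();
console.log('XHR shim active:', shimmed);
```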

You can have a look at one of my (not very polished) projects, which uses PoseNet in Node.js to extract features (cropped images of hands, by default) from videos: https://github.com/Bluebie/nzsl-training-data-generator/blob/master/pose-machine.js. It should be straightforward to modify it to dump the poses out as JSON.

@swatinair123 did @kangyizhang answer your question?

See this issue for an existing discussion and some work that has been done to convert the posenet model to python.

Automatically closing due to lack of recent activity. Please update the issue when new information becomes available, and we will reopen the issue. Thanks!

@Bluebie, is it possible to install xhr2 (`npm install xhr2`) and define the global variable in the posenet demo_utils.js file? I still seem to have the problem. Basically I want to export the following to a JSON file.

export function drawKeypoints(keypoints, minConfidence, ctx, scale = 1) {
  for (let i = 0; i < keypoints.length; i++) {
    const keypoint = keypoints[i];

    if (keypoint.score < minConfidence) {
      continue;
    }

    const {y, x} = keypoint.position;
    drawPoint(ctx, y * scale, x * scale, 3, color);
  }

  // Serialize the keypoints and write them to disk (requires Node's fs module).
  const test = JSON.stringify(keypoints);
  fs.writeFile('extract.json', test, (err) => {
    if (err) throw err;
    console.log('Data written to file');
  });
}

Is there another way out ?

I handle USB camera input via the opencv4nodejs package. Install the package and include it:

const cv = require("opencv4nodejs");

Initialize the camera with the desired resolution (you may need to change the input channel if your system has more than one camera):

let camera = new cv.VideoCapture(0);
camera.set(cv.CAP_PROP_FRAME_WIDTH, 640);
camera.set(cv.CAP_PROP_FRAME_HEIGHT, 480);

Now capture a frame, convert the image to RGB format and finally convert to a tensor which you can feed into your network:

camera.readAsync((err, mat) => {

    if (err) { console.log('Capture error:', err); return; }
    if (mat.empty) { console.log('Empty frame captured'); return; }

    // convert to RGB (OpenCV captures in BGR order)
    mat = mat.cvtColor(cv.COLOR_BGR2RGB);

    // convert to a 3d tensor; tf.tensor3d expects [height, width, channels]
    let buffer = new Uint8Array(mat.getData().buffer);
    let tFrame = tf.tensor3d(buffer, [480, 640, 3]);

    // net is your poseNet instance (estimateSinglePose returns a Promise)
    net.estimateSinglePose(tFrame).then((pose) => console.log(pose));
});

Maybe this will help someone.
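One detail worth checking here: `tf.tensor3d` takes the shape as [height, width, channels], so a 640x480 capture should be shaped [480, 640, 3], not [640, 480, 3]; a transposed shape scrambles the pixels and tanks the detection scores. A small sketch of the row-major HWC layout, with plain arrays:

```javascript
// Row-major HWC layout: the pixel at (row y, col x) of an H x W x C image
// sits at offset (y * W + x) * C in the flat buffer.
function pixelOffset(y, x, width, channels) {
  return (y * width + x) * channels;
}

const width = 640, height = 480, channels = 3;
const buffer = new Uint8Array(width * height * channels);

// Mark one pixel's red channel and read it back.
buffer[pixelOffset(100, 200, width, channels)] = 255;
console.log(buffer[pixelOffset(100, 200, width, channels)]); // 255
```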

Does that work well for you? I keep getting low scores for the detections, lower than 0.1.


Could you please guide me where to update the video path?
