Models: TensorFlow Multi-GPU Inference

Created on 23 Oct 2018 · 5 comments · Source: tensorflow/models

System information

  • **What is the top-level directory of the model you are using:** /home/dell/models/
  • **Have I written custom code (as opposed to using a stock example script provided in TensorFlow):** No
  • **OS Platform and Distribution (e.g., Linux Ubuntu 16.04):** Ubuntu 16.04.5 LTS (GNU/Linux 4.4.0-137-generic x86_64)
  • **TensorFlow installed from (source or binary):** via container image nvcr.io/nvidia/tensorflow:18.09-py3
  • **TensorFlow version (use command below):** 1.10
  • **Bazel version (if compiling from source):** n/a
  • **CUDA/cuDNN version:** 9.0/7.3
  • **GPU model and memory:** 3x Tesla P4 - 7GB
  • **Exact command to reproduce:** n/a

Describe the problem

I have a server with 3 GPUs, and I need to run inference on all of them to make full use of the system. Is there a code sample for this?


All 5 comments

Hi @vilmara, which model do you want to use for inference? Do you need to train it yourself? Could you provide more detailed information? Thanks.

Hi @yhliang2018, it could be any pre-trained official model (preferably ResNet-50). I don't need to train it myself; I just need sample code that shows how to run inference efficiently on all the GPUs in my system. Thanks.

@vilmara TF Hub would be a good starting point if you want to use pre-trained models. It provides several tutorials to start with: https://www.tensorflow.org/hub/
As this is not an issue, I will close it for now.

@vilmara Were you able to find any code to run inference efficiently on all the GPUs in your system?

Hi @SAswinGiridhar, have you explored the NVIDIA TensorRT Inference Server (TRTIS)? https://github.com/NVIDIA/tensorrt-inference-server
