The original textsum model training code uses '/cpu:0', and I had no problem running it with bazel built with CUDA support. But I take it that line actually pins the ops to the CPU, correct? Or does bazel automatically map the CPU setting to the GPU? In any case, I tried changing the line
def _Train(model, data_batcher):
  with tf.device('/cpu:0'):
to
def _Train(model, data_batcher):
  with tf.device('/gpu:0'):
I also had to change the code per #377 (https://github.com/tensorflow/models/pull/377/commits/a0de5ca9364f98aa36241c5ea7e891e2f1e1d80b), adding allow_soft_placement=True, to avoid the potential error
tensorflow.python.framework.errors.InvalidArgumentError: Cannot assign a device to node "Constant" ...
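For reference, the combined change looks roughly like this. This is only a minimal sketch: model.build_graph() and the session setup stand in for the real training-loop code in seq2seq_attention.py, which I have left out here.

import tensorflow as tf

def _Train(model, data_batcher):
  # Pin graph construction to the first GPU instead of the CPU.
  with tf.device('/gpu:0'):
    model.build_graph()  # placeholder for the actual graph-building call

  # allow_soft_placement=True lets ops that have no GPU kernel fall back
  # to the CPU instead of failing with "Cannot assign a device to node ...".
  config = tf.ConfigProto(allow_soft_placement=True)
  sess = tf.Session(config=config)
  # ... run the training loop with sess as before ...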
After all these changes, I ran the toy example and got the following output:
$:~/models-master/traintextsum$ bazel-bin/textsum/seq2seq_attention --mode=train --article_key=article --abstract_key=abstract --data_path=data/data --vocab_path=data/vocab --log_root=textsum/log_root --train_dir=textsum/log_root/train
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so.5 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so.8.0 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so.1 locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so.8.0 locally
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa79faf810>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa527d6590>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa79fafc10>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa79faf810>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa502af090>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa79fafc10>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa527d0e50>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ffa502af090>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
WARNING:tensorflow:<tensorflow.python.ops.rnn_cell.LSTMCell object at 0x7ff9e2e36250>: Using a concatenated state is slower and will soon be deprecated. Use state_is_tuple=True.
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 970
major: 5 minor: 2 memoryClockRate (GHz) 1.253
pciBusID 0000:01:00.0
Total memory: 3.93GiB
Free memory: 3.73GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:838] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 33672 get requests, put_count=18328 evicted_count=1000 eviction_rate=0.0545613 and unsatisfied allocation rate=0.488358
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2132 get requests, put_count=12139 evicted_count=10000 eviction_rate=0.823791 and unsatisfied allocation rate=0.00140713
running_avg_loss: 1.287174
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 205 get requests, put_count=1217 evicted_count=1000 eviction_rate=0.821693 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2633 get requests, put_count=13643 evicted_count=11000 eviction_rate=0.806274 and unsatisfied allocation rate=0.00075959
running_avg_loss: 0.780048
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1631 get requests, put_count=9644 evicted_count=8000 eviction_rate=0.829531 and unsatisfied allocation rate=0
running_avg_loss: 0.875641
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 39327 get requests, put_count=39759 evicted_count=18000 eviction_rate=0.452728 and unsatisfied allocation rate=0.447047
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 146 to 160
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2074 get requests, put_count=12088 evicted_count=10000 eviction_rate=0.827267 and unsatisfied allocation rate=0
running_avg_loss: 0.815771
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 200 get requests, put_count=3216 evicted_count=3000 eviction_rate=0.932836 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2758 get requests, put_count=15774 evicted_count=13000 eviction_rate=0.824141 and unsatisfied allocation rate=0
running_avg_loss: 1.158582
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 844 get requests, put_count=5861 evicted_count=5000 eviction_rate=0.853097 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 3435 get requests, put_count=18452 evicted_count=15000 eviction_rate=0.81292 and unsatisfied allocation rate=0
running_avg_loss: 1.122834
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1386 get requests, put_count=8405 evicted_count=7000 eviction_rate=0.832838 and unsatisfied allocation rate=0
running_avg_loss: 0.753792
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 39114 get requests, put_count=38619 evicted_count=17000 eviction_rate=0.440198 and unsatisfied allocation rate=0.447768
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 212 to 233
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2049 get requests, put_count=12070 evicted_count=10000 eviction_rate=0.8285 and unsatisfied allocation rate=0
running_avg_loss: 0.946446
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 135 get requests, put_count=2158 evicted_count=2000 eviction_rate=0.926784 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2755 get requests, put_count=14778 evicted_count=12000 eviction_rate=0.812018 and unsatisfied allocation rate=0
running_avg_loss: 0.879314
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 883 get requests, put_count=5908 evicted_count=5000 eviction_rate=0.84631 and unsatisfied allocation rate=0
running_avg_loss: 0.925954
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 33504 get requests, put_count=34606 evicted_count=15000 eviction_rate=0.433451 and unsatisfied allocation rate=0.415562
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 281 to 309
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1905 get requests, put_count=11933 evicted_count=10000 eviction_rate=0.838012 and unsatisfied allocation rate=0
running_avg_loss: 0.908271
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1166 get requests, put_count=7196 evicted_count=6000 eviction_rate=0.833797 and unsatisfied allocation rate=0
running_avg_loss: 0.887021
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 37880 get requests, put_count=37323 evicted_count=16000 eviction_rate=0.42869 and unsatisfied allocation rate=0.437883
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 339 to 372
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1962 get requests, put_count=11995 evicted_count=10000 eviction_rate=0.833681 and unsatisfied allocation rate=0
running_avg_loss: 0.988974
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 408 get requests, put_count=3445 evicted_count=3000 eviction_rate=0.870827 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2792 get requests, put_count=15829 evicted_count=13000 eviction_rate=0.821277 and unsatisfied allocation rate=0
running_avg_loss: 0.870846
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1240 get requests, put_count=8280 evicted_count=7000 eviction_rate=0.845411 and unsatisfied allocation rate=0
running_avg_loss: 0.910514
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 0 get requests, put_count=1044 evicted_count=1000 eviction_rate=0.957854 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2353 get requests, put_count=13397 evicted_count=11000 eviction_rate=0.821079 and unsatisfied allocation rate=0
running_avg_loss: 0.834017
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 522 get requests, put_count=3571 evicted_count=3000 eviction_rate=0.840101 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2961 get requests, put_count=16010 evicted_count=13000 eviction_rate=0.811993 and unsatisfied allocation rate=0
running_avg_loss: 0.749524
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1463 get requests, put_count=8517 evicted_count=7000 eviction_rate=0.821886 and unsatisfied allocation rate=0
running_avg_loss: 0.926840
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 36044 get requests, put_count=38209 evicted_count=17000 eviction_rate=0.444921 and unsatisfied allocation rate=0.413078
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 596 to 655
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2132 get requests, put_count=12191 evicted_count=10000 eviction_rate=0.820277 and unsatisfied allocation rate=0
running_avg_loss: 0.683415
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1069 get requests, put_count=6134 evicted_count=5000 eviction_rate=0.815129 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 3654 get requests, put_count=18719 evicted_count=15000 eviction_rate=0.801325 and unsatisfied allocation rate=0
running_avg_loss: 0.844601
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1873 get requests, put_count=10945 evicted_count=9000 eviction_rate=0.822293 and unsatisfied allocation rate=0
running_avg_loss: 1.029763
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 123 get requests, put_count=2202 evicted_count=2000 eviction_rate=0.908265 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2718 get requests, put_count=14797 evicted_count=12000 eviction_rate=0.810975 and unsatisfied allocation rate=0
running_avg_loss: 1.043226
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1046 get requests, put_count=6133 evicted_count=5000 eviction_rate=0.815262 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 3607 get requests, put_count=18694 evicted_count=15000 eviction_rate=0.802396 and unsatisfied allocation rate=0
running_avg_loss: 0.639421
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1913 get requests, put_count=11008 evicted_count=9000 eviction_rate=0.817587 and unsatisfied allocation rate=0
running_avg_loss: 0.756429
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 386 get requests, put_count=2491 evicted_count=2000 eviction_rate=0.80289 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2927 get requests, put_count=15032 evicted_count=12000 eviction_rate=0.798297 and unsatisfied allocation rate=0
running_avg_loss: 0.710093
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1137 get requests, put_count=6252 evicted_count=5000 eviction_rate=0.799744 and unsatisfied allocation rate=0
running_avg_loss: 0.970053
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 120 get requests, put_count=1247 evicted_count=1000 eviction_rate=0.801925 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2706 get requests, put_count=13833 evicted_count=11000 eviction_rate=0.7952 and unsatisfied allocation rate=0
running_avg_loss: 0.925311
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1070 get requests, put_count=6210 evicted_count=5000 eviction_rate=0.805153 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 3606 get requests, put_count=18746 evicted_count=15000 eviction_rate=0.800171 and unsatisfied allocation rate=0
running_avg_loss: 0.941991
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2039 get requests, put_count=10193 evicted_count=8000 eviction_rate=0.784852 and unsatisfied allocation rate=0
running_avg_loss: 0.921148
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 499 get requests, put_count=2668 evicted_count=2000 eviction_rate=0.749625 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2956 get requests, put_count=15125 evicted_count=12000 eviction_rate=0.793388 and unsatisfied allocation rate=0
running_avg_loss: 0.631239
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1584 get requests, put_count=7770 evicted_count=6000 eviction_rate=0.772201 and unsatisfied allocation rate=0
running_avg_loss: 0.909417
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 267 get requests, put_count=1471 evicted_count=1000 eviction_rate=0.67981 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2714 get requests, put_count=13918 evicted_count=11000 eviction_rate=0.790343 and unsatisfied allocation rate=0
running_avg_loss: 0.440527
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1548 get requests, put_count=7773 evicted_count=6000 eviction_rate=0.771903 and unsatisfied allocation rate=0
running_avg_loss: 1.254257
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 356 get requests, put_count=1603 evicted_count=1000 eviction_rate=0.62383 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2835 get requests, put_count=14082 evicted_count=11000 eviction_rate=0.781139 and unsatisfied allocation rate=0
running_avg_loss: 0.827865
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1669 get requests, put_count=7941 evicted_count=6000 eviction_rate=0.755572 and unsatisfied allocation rate=0
running_avg_loss: 0.962783
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 313 get requests, put_count=1612 evicted_count=1000 eviction_rate=0.620347 and unsatisfied allocation rate=0
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2805 get requests, put_count=14104 evicted_count=11000 eviction_rate=0.779921 and unsatisfied allocation rate=0
running_avg_loss: 0.534604
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1740 get requests, put_count=9069 evicted_count=7000 eviction_rate=0.77186 and unsatisfied allocation rate=0
running_avg_loss: 0.677655
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1268 get requests, put_count=6630 evicted_count=5000 eviction_rate=0.754148 and unsatisfied allocation rate=0
running_avg_loss: 0.544429
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 868 get requests, put_count=4266 evicted_count=3000 eviction_rate=0.703235 and unsatisfied allocation rate=0
running_avg_loss: 0.601161
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 39262 get requests, put_count=39320 evicted_count=13000 eviction_rate=0.330621 and unsatisfied allocation rate=0.339769
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 4385 to 4823
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2663 get requests, put_count=13101 evicted_count=10000 eviction_rate=0.763301 and unsatisfied allocation rate=0
running_avg_loss: 1.308416
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1904 get requests, put_count=9386 evicted_count=7000 eviction_rate=0.745792 and unsatisfied allocation rate=0
running_avg_loss: 0.964156
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1394 get requests, put_count=6924 evicted_count=5000 eviction_rate=0.722126 and unsatisfied allocation rate=0
running_avg_loss: 0.802375
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 921 get requests, put_count=4504 evicted_count=3000 eviction_rate=0.666075 and unsatisfied allocation rate=0
running_avg_loss: 0.900807
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 421 get requests, put_count=2062 evicted_count=1000 eviction_rate=0.484966 and unsatisfied allocation rate=0
running_avg_loss: 0.637615
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 419 get requests, put_count=2124 evicted_count=1000 eviction_rate=0.47081 and unsatisfied allocation rate=0
running_avg_loss: 1.051068
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 427 get requests, put_count=2203 evicted_count=1000 eviction_rate=0.453926 and unsatisfied allocation rate=0
running_avg_loss: 1.072792
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 696 get requests, put_count=3550 evicted_count=2000 eviction_rate=0.56338 and unsatisfied allocation rate=0
running_avg_loss: 1.055896
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1507 get requests, put_count=7446 evicted_count=5000 eviction_rate=0.671501 and unsatisfied allocation rate=0
running_avg_loss: 0.935947
running_avg_loss: 0.777941
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 498 get requests, put_count=2634 evicted_count=1000 eviction_rate=0.379651 and unsatisfied allocation rate=0
running_avg_loss: 0.778102
running_avg_loss: 0.817212
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 887 get requests, put_count=4262 evicted_count=2000 eviction_rate=0.469263 and unsatisfied allocation rate=0
running_avg_loss: 0.962390
running_avg_loss: 1.149842
running_avg_loss: 1.242884
running_avg_loss: 0.569922
running_avg_loss: 0.465822
running_avg_loss: 0.825633
running_avg_loss: 0.624228
running_avg_loss: 0.997631
running_avg_loss: 0.970832
running_avg_loss: 0.423489
running_avg_loss: 0.919888
running_avg_loss: 0.868830
running_avg_loss: 0.852818
running_avg_loss: 0.847950
running_avg_loss: 0.953032
running_avg_loss: 0.984923
running_avg_loss: 0.791656
running_avg_loss: 0.521412
running_avg_loss: 0.773636
running_avg_loss: 1.136145
running_avg_loss: 0.827604
running_avg_loss: 0.971522
running_avg_loss: 1.014736
running_avg_loss: 1.145882
running_avg_loss: 0.853915
running_avg_loss: 1.230012
running_avg_loss: 0.658319
running_avg_loss: 0.990005
running_avg_loss: 0.903349
Questions:
- The tf.device line controls placement of operations. See the API docs.
- The state_is_tuple=True warning is a result of changing APIs. Until TensorFlow 1.0, APIs can change considerably between releases, and that warning means the code needs to be updated to avoid recently deprecated APIs (see the sketch below). @panyx0718 or @peterjliu: could you look into this?
- PoolAllocator is a class used by the TensorFlow runtime to manage memory on the GPUs. The log line is meant to be informative (log lines that start with an I are informational, while those that start with E indicate an error). See Paul's StackOverflow post for more details on this allocator, but the short story is that once memory usage stabilizes those messages stop appearing.

Does the above answer the questions you had?
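To illustrate the state_is_tuple point, here is a minimal sketch of the update (num_units=256 is just an example value; any code that slices the old concatenated state would also need to be updated to unpack the LSTMStateTuple):

import tensorflow as tf

# Old style: triggers the warning, because the cell state c and the hidden
# state h are concatenated into a single tensor.
cell = tf.nn.rnn_cell.LSTMCell(num_units=256)

# Updated style: the cell returns an LSTMStateTuple(c, h) instead, which is
# faster and is the non-deprecated API.
cell = tf.nn.rnn_cell.LSTMCell(num_units=256, state_is_tuple=True)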
Automatically closing due to lack of recent activity. Please let us know when further information is available and we will reopen. Thanks!