Keras: TypeError: can't pickle _thread.RLock objects When trying to save keras model

Created on 17 Jun 2019 · 6 comments · Source: keras-team/keras

Please make sure that this is a Bug or a Feature Request and provide all applicable information asked by the template.
If your issue is an implementation question, please ask your question on StackOverflow or on the Keras Slack channel instead of opening a GitHub issue.

System information

  • Have I written custom code (as opposed to using example directory): Yes
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Windows 10
  • TensorFlow backend (yes / no): yes
  • TensorFlow version: 1.13.1
  • Keras version: 2.2.4
  • Python version: 3.7 (anaconda)
  • CUDA/cuDNN version: CUDA 10.0 cuDNN 7.6 (conda)
  • GPU model and memory: GTX1050 (surface pro 2), 2GB memory

Describe the current behavior
I receive the error TypeError: can't pickle _thread.RLock objects when trying to save my Keras model. I believe it has to do with the Lambda layers I'm using, but I'm not sure which ones to fix.

So far (as seen in the code below), I've tried following answer 3 of this question: Checkpointing keras model: TypeError: can't pickle _thread.lock objects

I did this by extracting the arguments out of the lambdas with functools.partial so they don't cause problems. This did not fix the error, though.
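For reference, the extraction pattern I followed looks roughly like this (a minimal sketch of that answer's suggestion; the names here are just illustrative):

import functools
import keras.backend as K
from keras.layers import Input, Lambda
from keras.models import Model

inp = Input(shape=(4,))
# Bind the constant outside the lambda via functools.partial, so the
# lambda body itself references nothing but the partial object.
multiply_by_scale = functools.partial(K.tf.multiply, K.constant(0.1))
out = Lambda(lambda q: multiply_by_scale(q))(inp)
model = Model(inputs=inp, outputs=out)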

Describe the expected behavior
I believe I should be able to save the model successfully.

Code to reproduce the issue
Here's the code for the model I'm trying to save:

import tensorflow as tf
from keras.layers import Lambda, Add, Multiply, Conv2D, Input
import keras.backend as K
import functools
import keras
from keras.models import Model

def resBlock_Keras(x, channels=64, kernel_size=[3,3], scale=1):
    tmp = Conv2D(channels, kernel_size, activation='relu', padding='same')(x)
    tmp = Conv2D(channels, kernel_size, padding='same')(tmp)
    multiply_by_scale = functools.partial(
        K.tf.multiply,
        K.constant(scale)
    )
    add_input = functools.partial(
        K.tf.add,
        x
    )
    tmp = Lambda(lambda q: multiply_by_scale(q))(tmp)
    return Lambda(lambda q: add_input(q))(tmp)

def PS_Keras(X, r, color=False):
    if color:
        split = functools.partial(
            K.tf.split,
            num_or_size_splits=3,
            axis=3
        )
        concat = functools.partial(
            K.tf.concat,
            axis=3
        )
        Xc = Lambda(lambda x: split(x))(X)
        shifts = [_phase_shift_keras(x, r) for x in Xc]
        X = Lambda(lambda q: concat(q))(shifts)
    else:
        X = _phase_shift_keras(X, r)
    return X

def _phase_shift_keras(I, r):
    bsize, a, b, c = I.get_shape().as_list()
    bsize = K.shape(I)[0] # Handling Dimension(None) type for undefined batch dim
    X = K.reshape(I, [bsize, a, b, c//(r*r), r, r]) # bsize, a, b, c/(r*r), r, r
    X = K.permute_dimensions(X, (0, 1, 2, 5, 4, 3))  # bsize, a, b, r, r, c/(r*r)
    # Keras backend does not support tf.split, so in future versions this could be nicer
    X = [X[:,i,:,:,:,:] for i in range(a)] # a, [bsize, b, r, r, c/(r*r)]
    X = K.concatenate(X, 2)  # bsize, b, a*r, r, c/(r*r)
    X = [X[:,i,:,:,:] for i in range(b)] # b, [bsize, a*r, r, c/(r*r)]
    X = K.concatenate(X, 2)  # bsize, a*r, b*r, c/(r*r)
    return X

def upsample_keras(x,scale=2,features=64,activation='relu'):
    assert scale in [2,3,4]
    x = Conv2D(features,[3,3],activation=activation,padding='same')(x)
    if scale == 2:
        ps_features = 3*(scale**2)
        x = Conv2D(ps_features,[3,3],activation=activation,padding='same')(x)
        ps = functools.partial(
            PS_Keras,
            r=2,
            color=True
        )
        x = Lambda(lambda q: ps(q))(x)
    elif scale == 3:
        ps_features = 3*(scale**2)
        x = Conv2D(ps_features,[3,3],activation=activation,padding='same')(x)
        ps = functools.partial(
            PS_Keras,
            r=3,
            color=True
        )
        x = Lambda(lambda q: ps(q))(x)
    elif scale == 4:
        ps_features = 3*(2**2)
        ps = functools.partial(
            PS_Keras,
            r=2,
            color=True
        )
        for i in range(2):
            x = Conv2D(ps_features,[3,3],activation=activation,padding='same')(x)
            x = Lambda(lambda q: ps(q))(x)
    return x

def subtract_mean(x):
    return tf.subtract(x,K.constant(127.0))

def build_model(img_size=32,num_layers=32,feature_size=64,scale=2,output_channels=3):
    MEAN_PIXEL = 127.0
    MIN_PIXEL = 0.0
    MAX_PIXEL = 255.0

    scaling_factor = 0.1
    input = Input(shape=(img_size,img_size,output_channels))
    curr_layer = Lambda(lambda x: subtract_mean(x))(input)
    first_conv = Conv2D(feature_size, [3,3], padding='same')(curr_layer)
    curr_layer = first_conv
    for i in range(num_layers):
        curr_layer = resBlock_Keras(curr_layer, feature_size, scale=scaling_factor)
    curr_layer = Conv2D(feature_size, [3,3],padding='same')(curr_layer)
    curr_layer = Add()([curr_layer, first_conv])

    clip_by_value = functools.partial(
        K.tf.clip_by_value,
        clip_value_min=K.constant(MIN_PIXEL),
        clip_value_max=K.constant(MAX_PIXEL)
    )

    upsample = functools.partial(
        upsample_keras,
        scale = scale,
        features = feature_size,
        activation=None
    )

    curr_layer = Lambda(lambda x: upsample(x))(curr_layer)
    output = Lambda(lambda x: clip_by_value(x))(curr_layer)

    model = Model(inputs=input, outputs=output)
    model.compile(loss='mae',optimizer='adam')
    model.summary()
    model.save("model.h5")

build_model()

Other info / logs
Here is the full traceback for the error:

Traceback (most recent call last):
  File "tmp.py", line 125, in <module>
    build_model()
  File "tmp.py", line 123, in build_model
    model.save("model.h5")
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\site-packages\keras\engine\network.py", line 1090, in save
    save_model(self, filepath, overwrite, include_optimizer)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\site-packages\keras\engine\saving.py", line 382, in save_model
    _serialize_model(model, f, include_optimizer)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\site-packages\keras\engine\saving.py", line 83, in _serialize_model
    model_config['config'] = model.get_config()
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\site-packages\keras\engine\network.py", line 931, in get_config
    return copy.deepcopy(config)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 215, in _deepcopy_list
    append(deepcopy(a, memo))
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in _deepcopy_tuple
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 220, in <listcomp>
    y = [deepcopy(a, memo) for a in x]
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 180, in deepcopy
    y = _reconstruct(x, memo, *rv)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 280, in _reconstruct
    state = deepcopy(state, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 150, in deepcopy
    y = copier(x, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 240, in _deepcopy_dict
    y[deepcopy(key, memo)] = deepcopy(value, memo)
  File "C:\Users\pwatm\Anaconda3\envs\gpu\lib\copy.py", line 169, in deepcopy
    rv = reductor(4)
TypeError: can't pickle _thread.RLock objects

All 6 comments

I am facing a similar issue. Can someone please advise on my case, described here: https://stackoverflow.com/questions/57233539/typeerror-cant-pickle-thread-rlock-objects

Hey @KashyapCKotak, let me know if you get a response. I've asked this same question on SO and never got any responses at all: https://stackoverflow.com/questions/56572586/typeerror-cant-pickle-thread-rlock-objects-when-trying-to-save-keras-model

I might try to replace the lambda layers with a custom layer, which I was trying to avoid earlier. Curious that they include lambda layers if you can't save them 👀

Sadly, after getting no answer, I am saving only the weights in my checkpoint and storing the epoch number separately using a custom callback. It's really strange for such a famous library to have such a bug with no answers/fixes/workarounds.
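Roughly like this (a minimal sketch of that workaround; the class name and file paths are just placeholders):

from keras.callbacks import Callback

class WeightsAndEpochCheckpoint(Callback):
    # Save only the weights each epoch, plus the epoch number in a
    # side file, since model.save() dies on the Lambda layers.
    def __init__(self, weights_path, epoch_path):
        super(WeightsAndEpochCheckpoint, self).__init__()
        self.weights_path = weights_path
        self.epoch_path = epoch_path

    def on_epoch_end(self, epoch, logs=None):
        self.model.save_weights(self.weights_path)
        with open(self.epoch_path, 'w') as f:
            f.write(str(epoch))

To resume, rebuild the model in code, call model.load_weights(...), and read the epoch number back from the side file.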

Oh NO! I just now ran into this bug and was hoping a fix had come out for this after all this time :( Awful! I'm honestly not sure what to do now. @KashyapCKotak Saving weights works?

I think you shouldn't use the same variable as the input and output of a Lambda, or as a Lambda parameter.
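For example, resBlock_Keras could avoid binding the input tensor x into a Lambda closure by using the built-in Add layer and passing scale through Lambda's arguments dict (a sketch of that idea; not verified against this exact setup):

from keras.layers import Add, Conv2D, Lambda

def resBlock_Keras(x, channels=64, kernel_size=(3, 3), scale=1):
    tmp = Conv2D(channels, kernel_size, activation='relu', padding='same')(x)
    tmp = Conv2D(channels, kernel_size, padding='same')(tmp)
    # scale is a plain float passed via `arguments`, so nothing
    # graph-related gets captured in the lambda's closure.
    tmp = Lambda(lambda q, s: q * s, arguments={'s': scale})(tmp)
    # Add() replaces the Lambda that closed over the tensor x.
    return Add()([x, tmp])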
