Keras: Split tensor

Created on 24 May 2016 · 16 comments · Source: keras-team/keras

I'm doing a lambda layer in which I'd like to split a tensor into two (so the opposite of K.concatenate, essentially) to perform some different operations on the two parts, before concatenating them again. Any thoughts on how to split with the Keras backend?

import keras.backend as K
t = K.ones((12,3))
t1, t2 = ...  # Split somehow?
t = K.concatenate([t1,t2])


All 16 comments

Slicing should work

import keras.backend as K
t = K.ones((12, 3))
t1 = t[:, :1] + 1
t2 = t[:, 1:] - 1
t3 = K.concatenate([t1, t2])
print(K.eval(t3))
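The same indexing can be checked with plain NumPy, which follows the identical slicing semantics (a NumPy analogue of the backend code above, not the Keras code itself):

```python
import numpy as np

# NumPy analogue of the backend slicing above: same indices, same result.
t = np.ones((12, 3))
t1 = t[:, :1] + 1                       # first column becomes 2.0
t2 = t[:, 1:] - 1                       # remaining columns become 0.0
t3 = np.concatenate([t1, t2], axis=-1)  # glue the parts back together
print(t3.shape)   # (12, 3)
```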

Yes, @joelthchao, that's what I'm currently doing. Thanks. :smile:

However, that requires knowing where to slice. Any way to get around that?

Do you mean split tensor into half without knowing shape?

Yes.

Doesn't something like t1 = t[:, :t.shape[-1]//2] + 1 work?

@lemuriandezapada, it seems like it should, but

t[:, :, int(t.shape[-1]/2):]

results in

TypeError: int() argument must be a string, a bytes-like object or a number, not 'TensorVariable'

It's not possible to infer the tensor shape without eval in this situation. You need to write a custom layer.
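When the shape is known at graph-construction time, the split point can be computed as a plain Python integer rather than a symbolic expression, which avoids the TensorVariable error above. A minimal NumPy sketch of the idea (with the Keras backend, K.int_shape would play the role of .shape here, assuming the last dimension is statically known):

```python
import numpy as np

# Sketch: split a tensor in half along its last axis without hard-coding
# the index, assuming the last dimension is known statically.
t = np.ones((12, 4))
half = t.shape[-1] // 2   # a plain Python int, not a symbolic TensorVariable
t1, t2 = t[:, :half], t[:, half:]
print(t1.shape, t2.shape)   # (12, 2) (12, 2)
```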

It would be nice to have TensorFlow's slice method in the Keras backend.

@joelthchao But when I train such a model, it always throws the error "Exception: Output tensors to a Model must be Keras tensors. Found: Tensor("add_292:0", shape=(?, 10), dtype=float32)"

@lixiaosi33 Since we are using keras.backend for the tensor operations, it produces a TF or TH tensor, not a Keras tensor. To build a model, you can wrap the operation in a Lambda layer:

from keras.layers import Input, Lambda
from keras.models import Model
a = Input(shape=(3,))
def slice(x):
    return x[:, x:1]
b = Lambda(slice)(a)
model = Model(a, b)
model.summary()

Just to add to the post, the return of the function should be return x[:, 0:1] rather than return x[:, x:1], if I am not wrong.

I have another related issue: splitting a tensor along the very first axis, the batch axis. Following the code above, one ends up with the following snippet:

from keras.layers import Input, Lambda
from keras.models import Model
a = Input(batch_shape=(10,3))
def slice(x):
    return x[0:4]
b = Lambda(slice)(a)
model = Model(a, b)
model.summary()

Everything goes as expected: we end up with a model whose input has shape (10, 3) and whose output has shape (4, 3). However, the model can't even make a prediction:

model.predict(np.zeros((10,3)))

gives

/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in predict(self, x, batch_size, verbose)
   1583         f = self.predict_function
   1584         return self._predict_loop(f, ins,
-> 1585                                   batch_size=batch_size, verbose=verbose)
   1586
   1587     def train_on_batch(self, x, y,

/usr/local/lib/python2.7/dist-packages/keras/engine/training.pyc in _predict_loop(self, f, ins, batch_size, verbose)
   1219
   1220             for i, batch_out in enumerate(batch_outs):
-> 1221                 outs[i][batch_start:batch_end] = batch_out
   1222             if verbose == 1:
   1223                 progbar.update(batch_end)

ValueError: could not broadcast input array from shape (4,3) into shape (10,3)

Any comments are appreciated.
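For what it's worth, the broadcast failure can be reproduced directly in NumPy: _predict_loop preallocates the output buffer using the input's batch size, so a (4, 3) batch result no longer fits. A simplified reconstruction of the failing assignment:

```python
import numpy as np

# Simplified reconstruction of the failing assignment in _predict_loop:
# the output buffer is allocated for 10 rows, but the model returned 4.
outs = np.zeros((10, 3))
batch_out = np.zeros((4, 3))
try:
    outs[0:10] = batch_out   # same shapes as in the traceback
except ValueError as e:
    print(e)   # could not broadcast input array from shape (4,3) into shape (10,3)
```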

  1. Change return x[:, x:1] to return x[:, :1]; it's just a typo.
  2. The batch size should not change during training/testing. If you insist, I recommend slicing another dimension instead (e.g. go from (1, 10, 3) to (1, 4, 3)) rather than slicing the batch axis.
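In other words, move the dimension you want to cut out of the batch axis. A NumPy sketch of the suggested layout (shapes are illustrative):

```python
import numpy as np

# Keep the batch size fixed at 1 and slice along axis 1 instead of axis 0.
x = np.zeros((1, 10, 3))   # was (10, 3) with the 10 on the batch axis
y = x[:, 0:4, :]           # slice axis 1; the batch size stays 1
print(y.shape)             # (1, 4, 3)
```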

I get an error like this:
ain = Input(shape=(1, N))
x = Dense(12)(ain[:, 0:1])
y = Dense(12)(ain[:, 1:2])
Then it is impossible to concatenate x and y.
Error:
Error:
ValueError: A Concatenate layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(None, 0, 32), (None, 1, 32)]

Taking ain[:, 0:1] always gives me shape (None, 1, 32), but the others always give (None, 0, 32); i.e. for every i not equal to 0, ain[:, i:i+1] gives (None, 0, 32).

Does anyone know the reason?
Many thanks!!!

@tat-dat When you set shape=(1, N) in Input, you actually set the input shape to (batch_size, 1, N).
Then you take slices [:, 0:1, :], which is (batch_size, 1, N), and [:, 1:2, :], which is (batch_size, 0, N), so it's empty.
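The empty slice is easy to verify in plain NumPy, which slices past the end of an axis the same way (an analogue matching the shapes in the error above, not the Keras code itself):

```python
import numpy as np

a = np.zeros((8, 1, 32))     # stands in for Input(shape=(1, N)): axis 1 has length 1
print(a[:, 0:1, :].shape)    # (8, 1, 32) -- the only non-empty slice of axis 1
print(a[:, 1:2, :].shape)    # (8, 0, 32) -- past the end, so the slice is empty
```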

@arquolo Thanks!

@arquolo How can I slice a tensor horizontally into chunks of a specific size? What should the Lambda layer look like?
