Keras Merge Concatenate Failed Because of Different Input Shapes Even Though Input Shapes Are the Same
Solution 1:
If the error message says you're using these shapes, then you can't concatenate all of them:
[(None, 6), (None, 7, 62), (None, 23, 62), (None, 2, 62)]
You can try to concatenate the last three:
left_combined = keras.layers.Concatenate(axis=1)([l1_conv_net, l2_conv_net, l3_conv_net])
Don't print the tensors themselves; print K.int_shape(tensor) to see the actual shapes. (By the way, something is really going wrong with what you posted, because the shapes of the tensors are odd for 2D convolutions. The Keras shapes make sense if you're using 1D convolutions or RNNs.)
If your backend is not TensorFlow, you may have wrong output_shape parameters in custom or Lambda layers somewhere.
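To see why only the last three tensors can be merged, the shape rule can be sketched with NumPy stand-ins (a batch size of 2 replaces the None axis; this is an illustration, not the asker's model):

```python
import numpy as np

# Stand-ins for the three 3D tensors.
a = np.zeros((2, 7, 62))
b = np.zeros((2, 23, 62))
c = np.zeros((2, 2, 62))

# Concatenating along axis=1 works: every other axis matches.
merged = np.concatenate([a, b, c], axis=1)
print(merged.shape)  # (2, 32, 62)

# The 2D tensor cannot join: its number of dimensions differs.
d = np.zeros((2, 6))
try:
    np.concatenate([a, b, c, d], axis=1)
except ValueError as e:
    print("fails:", e)
```

The same rule applies inside Keras: Concatenate requires all inputs to have the same rank and to agree on every axis except the concatenation axis.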
Solution 2:
Keras concatenate has some restrictions. The number of dimensions has to be the same, which is why your first tensor fails. You can solve that quickly by reshaping it to (None, 1, 6). If you are merging along the first axis, all None dimensions have to be the same in the calculations. Looking at the source code, it appears that having an axis as None is not a problem in itself.
So reshape the first tensor, and check that the None dimension will always be the same across all tensors.
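A NumPy sketch of the reshape idea (again with 2 standing in for None). Note that reshaping fixes the rank, but concatenation along axis=1 still requires every remaining axis to agree, so the last dimension must match too:

```python
import numpy as np

x = np.zeros((2, 6))      # stand-in for the (None, 6) tensor
x3 = x.reshape(2, 1, 6)   # now 3D: (None, 1, 6)

# With matching trailing dimensions, concatenation along axis=1 succeeds.
y = np.zeros((2, 1, 6))   # a hypothetical tensor with the same last axis
merged = np.concatenate([x3, y], axis=1)
print(merged.shape)  # (2, 2, 6)
```

In Keras the equivalent step is a Reshape((1, 6)) layer applied to the 2D tensor before the Concatenate layer.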
Solution 3:
I'm not a big expert, but in my case defining the input as Input(shape=(1, 1)) instead of Input(shape=(1,)) added the required dimension, and the merge was accepted. Just try adding a dimension of length 1.
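The same trick in NumPy terms: inserting a length-1 axis changes the rank without changing the data (a sketch; the batch of 4 is arbitrary):

```python
import numpy as np

v = np.zeros((4, 1))             # what Input(shape=(1,)) yields for a batch of 4
v2 = np.expand_dims(v, axis=1)   # (4, 1, 1), matching Input(shape=(1, 1))
print(v.shape, v2.shape)  # (4, 1) (4, 1, 1)
```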
Solution 4:
One good solution for concatenating outputs of intermediate layers is to Flatten them before concatenation:
from keras.layers import Flatten, concatenate

l1_conv_net_features = Flatten(name="flatten_l1_conv_net")(l1_conv_net)
l2_conv_net_features = Flatten(name="flatten_l2_conv_net")(l2_conv_net)
l3_conv_net_features = Flatten(name="flatten_l3_conv_net")(l3_conv_net)
all_features = concatenate([l1_conv_net_features, l2_conv_net_features, l3_conv_net_features])
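The shape arithmetic behind this can be checked with NumPy (a sketch; 2 stands in for the None batch axis, and reshape(batch, -1) plays the role of Flatten):

```python
import numpy as np

# Stand-ins for the three conv outputs.
l1 = np.zeros((2, 7, 62))
l2 = np.zeros((2, 23, 62))
l3 = np.zeros((2, 2, 62))

# Flatten each to (batch, features), then concatenate along the feature axis.
flat = [t.reshape(t.shape[0], -1) for t in (l1, l2, l3)]
all_features = np.concatenate(flat, axis=-1)
print(all_features.shape)  # (2, 1984): 7*62 + 23*62 + 2*62 features
```

Flattening sidesteps the rank mismatch entirely, since every input becomes 2D, at the cost of losing the spatial/temporal structure of the feature maps.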