Where should the LSTM be placed in my CNN for NLP and how do I connect it?

Question:

I would like to add an LSTM to my 1D-CNN to improve performance on my NLP task, but I don't know exactly where to place it. I have found the following:

A CNN LSTM can be defined by adding CNN layers on the front end followed by LSTM layers with a Dense layer on the output.

(Source: https://machinelearningmastery.com/cnn-long-short-term-memory-networks/)

However, if I set it up like this (see the code below), I get the following error:

ValueError: Input 0 of layer "lstm_4" is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (None, 128)

This is because LSTM expects a 3D input array. Is there a way to fix this error and keep the LSTM at this position, or should it go somewhere else?
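For reference, here is a minimal, self-contained sketch (an illustration, not part of the original post) of the shape requirement behind this error: Keras' LSTM accepts a 3D tensor of shape (batch, timesteps, features) and rejects 2D input.

import tensorflow as tf

x3d = tf.random.normal((2, 10, 16))               # (batch, timesteps, features)
print(tf.keras.layers.LSTM(8)(x3d).shape)         # (2, 8) -- 3D input is accepted

x2d = tf.keras.layers.GlobalMaxPooling1D()(x3d)   # collapses the time axis -> (2, 16)
# tf.keras.layers.LSTM(8)(x2d)                    # would raise: expected ndim=3, found ndim=2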

from keras.models import Sequential
from keras.layers import Input, Embedding, Dense, Dropout, SpatialDropout1D, GlobalMaxPooling1D, Conv2D, MaxPool2D, LSTM, Bidirectional, Lambda, Conv1D, MaxPooling1D

model_lstm = Sequential()

model_lstm.add(
        Embedding(vocab_size
                ,embed_size
                ,weights = [embedding_matrix] #Supplied embedding matrix created from glove
                ,input_length = maxlen
                ,trainable=False)
         )
model_lstm.add(SpatialDropout1D(rate = 0.4))
model_lstm.add(Conv1D(256, 7, activation="relu"))
model_lstm.add(MaxPooling1D())
#model_lstm.add(LSTM(128, dropout=0.3, recurrent_dropout=0.3, return_sequences=True))
model_lstm.add(Conv1D(128, 5, activation="relu"))
model_lstm.add(MaxPooling1D())
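# NOTE: GlobalMaxPooling1D collapses the time axis to a 2D (batch, features) tensor,
# which is what makes the LSTM below raise "expected ndim=3, found ndim=2".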
model_lstm.add(GlobalMaxPooling1D())
model_lstm.add(LSTM(128, dropout=0.3,return_sequences=True))
model_lstm.add(Dropout(0.3))
model_lstm.add(Dense(128, activation="relu"))
model_lstm.add(Dense(4, activation='softmax'))
print(model_lstm.summary())

Complete Code

print("Train shape : ",train_X2.shape)
print("Test shape : ",test_X2.shape)

## Tokenize the sentences
tokenizer = Tokenizer(num_words=num_unique_words)
tokenizer.fit_on_texts(list(train_X2))
train_X2 = tokenizer.texts_to_sequences(train_X2)
test_X2 = tokenizer.texts_to_sequences(test_X2)

## Pad the sentences 
train_X = pad_sequences(train_X2, maxlen=maxlen)
test_X = pad_sequences(test_X2, maxlen=maxlen)

word_index = tokenizer.word_index
vocab_size = len(tokenizer.word_index) + 1

from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical

#label encoding
le = LabelEncoder()
train_y = le.fit_transform(train_y2.tolist())
test_y = le.transform(test_y2.tolist())

#one hot encoding
train_y = to_categorical(train_y)
test_y = to_categorical(test_y)

# Word2Vec as pretrained embedding
import gensim
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

from gensim.models.keyedvectors import KeyedVectors
NUM_WORDS=20000
word_vectors = KeyedVectors.load_word2vec_format(r'./input/GoogleNews-vectors-negative300.bin', binary=True)

EMBEDDING_DIM=300
vocabulary_size=min(len(word_index)+1,NUM_WORDS)
embedding_matrix = np.zeros((vocabulary_size, EMBEDDING_DIM))
for word, i in word_index.items():
    if i>=NUM_WORDS:
        continue
    try:
        embedding_vector = word_vectors[word]
        embedding_matrix[i] = embedding_vector
    except KeyError:
        embedding_matrix[i]=np.random.normal(0,np.sqrt(0.25),EMBEDDING_DIM)

del(word_vectors)

from keras.layers import Embedding
embedding_layer = Embedding(vocabulary_size,
                            EMBEDDING_DIM,
                            weights=[embedding_matrix],
                            trainable=True)

# Alternative: randomly initialised embedding layer (no pretrained weights)
embedding_layer = Embedding(vocabulary_size,
                            EMBEDDING_DIM)

# CNN
Asked By: Test


Answers:

Maybe try removing the GlobalMaxPooling1D layer, which reduces your tensor to 2D. For example, try copying and running this:

from keras.models import Sequential
from keras.layers import Input, Embedding, Dense, Dropout, SpatialDropout1D, GlobalMaxPooling1D, Conv2D, MaxPool2D, LSTM, Bidirectional, Lambda, Conv1D, MaxPooling1D

model_lstm = Sequential()

model_lstm.add(
        Embedding(vocab_size
                ,embed_size
                ,weights = [embedding_matrix] #Supplied embedding matrix created from glove
                ,input_length = maxlen
                ,trainable=False)
         )
model_lstm.add(SpatialDropout1D(rate = 0.4))
model_lstm.add(Conv1D(256, 7, activation="relu"))
model_lstm.add(MaxPooling1D())
#model_lstm.add(LSTM(128, dropout=0.3, recurrent_dropout=0.3, return_sequences=True))
model_lstm.add(Conv1D(128, 5, activation="relu"))
model_lstm.add(MaxPooling1D())
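# The Conv1D/MaxPooling1D output is still 3D (batch, timesteps, features), so the
# LSTM can consume it directly; return_sequences=False yields a single (batch, 128)
# vector for the dense layers below.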
model_lstm.add(LSTM(128, dropout=0.3, return_sequences=False))
model_lstm.add(Dropout(0.3))
model_lstm.add(Dense(128, activation="relu"))
model_lstm.add(Dense(4, activation='softmax'))
print(model_lstm.summary())
Answered By: AloneTogether

You can keep all of your existing work; the point of this example is that labels can be created without a local dictionary or a quick word lookup, directly from a single-line string. Nothing else in your pipeline needs to change.

Put simply, I tried to build it without creating a tokenizer: the input is flat-mapped, and the labels are created from a separate source, a single-line string of song lyrics.

Sample: this approach is convenient when you do not have a quick word mapping, because you can insert one manually; as you can see from the source code, you only need to fill in the words used for the quick lookup.

import os
from os.path import exists

import tensorflow as tf
import tensorflow_text as tft
import matplotlib.pyplot as plt

import gensim
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

from gensim.models.keyedvectors import KeyedVectors

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
None
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
physical_devices = tf.config.experimental.list_physical_devices('GPU')
assert len(physical_devices) > 0, "Not enough GPU hardware devices available"
config = tf.config.experimental.set_memory_growth(physical_devices[0], True)
print(physical_devices)
print(config)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Variables
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
input_word = tf.constant(" 'Cause it's easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Slipping outta your hand into the dirt Easy as an ice cream sundae Every dancer gets a little hurt Easy as an ice cream sundae Oh, easy as an ice cream sundae ")
dataset = tf.data.Dataset.from_tensors( tf.strings.bytes_split(input_word) )
window_size = 6
dataset = dataset.map( lambda x:  tft.sliding_window(x, width=window_size, axis=0) ).flat_map(tf.data.Dataset.from_tensor_slices)
dataset = dataset.batch(1)

list_word = []
label = []
vocab = [ "a", "b", "c", "d", "e", "f", "g", "h", "I", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "_" ]
vocab_hot = [ "ice" ]
layer = tf.keras.layers.StringLookup(vocabulary=vocab)
layer_hot = tf.keras.layers.StringLookup(vocabulary=vocab_hot)
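# Build character-level training pairs: each example is a 6-character window mapped
# to integer indices with StringLookup, and the label is 1 when the window's first
# three characters spell "ice" (the single word in the "hot" vocabulary).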

for example in dataset.take(200):
    sequences_mapping_string = layer(example[0])
    sequences_mapping_string = tf.constant( sequences_mapping_string, shape=(1, 6) )
    list_word.append(sequences_mapping_string.numpy())

    sequences_mapping_string = tf.reduce_sum(layer_hot( example[0][0] + example[0][1] + example[0][2] ))
    sequences_mapping_string = tf.constant( sequences_mapping_string, shape=(1, 1) )
    
    label.append(sequences_mapping_string.numpy())

list_word = tf.constant(list_word, shape=(200, 1, 6, 1), dtype=tf.int64)
label = tf.constant(label, shape=(200, 1, 1, 1), dtype=tf.int64)

dataset = tf.data.Dataset.from_tensor_slices((list_word, label))

checkpoint_path = "F:\\models\\checkpoint\\" + os.path.basename(__file__).split('.')[0] + "\\TF_DataSets_01.h5"
checkpoint_dir = os.path.dirname(checkpoint_path)

if not exists(checkpoint_dir) : 
    os.mkdir(checkpoint_dir)
    print("Create directory: " + checkpoint_dir)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Class / Definition
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class MyLSTMLayer( tf.keras.layers.LSTM ):
    def __init__(self, units, return_sequences, return_state):
        super(MyLSTMLayer, self).__init__( units, return_sequences=True, return_state=False )
        self.num_units = units

    def build(self, input_shape):
        self.kernel = self.add_weight("kernel",
        shape=[int(input_shape[-1]),
        self.num_units])

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)                       

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Callback
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
class custom_callback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if( logs['accuracy'] >= 0.97 ):
            self.model.stop_training = True
    
custom_callback = custom_callback()

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Initialize
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
mycustomlayer = MyLSTMLayer( 64, True, False )
mycustomlayer_2 = MyLSTMLayer( 16, True, False )

model = tf.keras.models.Sequential([
    tf.keras.layers.InputLayer(input_shape=(6, 1)),
    tf.keras.layers.Embedding(1000, 128, input_length=1),
    tf.keras.layers.Reshape(( 6, 128 )),
    tf.keras.layers.SpatialDropout1D( rate = 0.4 ),
    tf.keras.layers.Conv1D(32, 6, activation="relu"),
    tf.keras.layers.MaxPooling1D(strides=1, pool_size=1),
    ### LSTM
    mycustomlayer,
    tf.keras.layers.Reshape(( 1, 1, 64 )),
    tf.keras.layers.UpSampling2D( size=(4, 4), data_format=None, interpolation='nearest' ),
    tf.keras.layers.Conv1D(16, 3, activation="relu"),
    tf.keras.layers.Reshape(( 8, 16 )),
    tf.keras.layers.MaxPooling1D(),
    tf.keras.layers.GlobalMaxPooling1D(),
    ### LSTM
    tf.keras.layers.Reshape(( 1, 16 )),
    mycustomlayer_2,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(4),
    
], name="MyModelClassification")

model.build()
model.summary()


"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Optimizer
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
optimizer = tf.keras.optimizers.SGD(
    learning_rate=0.000001,
    momentum=0.5,
    nesterov=True,
    name='SGD',
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Loss Fn
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""                               
lossfn = tf.keras.losses.SparseCategoricalCrossentropy(
    from_logits=False,
    reduction=tf.keras.losses.Reduction.AUTO,
    name='sparse_categorical_crossentropy'
)

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Model Summary
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
model.compile(optimizer=optimizer, loss=lossfn, metrics=['accuracy'])

"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: FileWriter
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
if exists(checkpoint_path) :
    model.load_weights(checkpoint_path)
    print("model load: " + checkpoint_path)
    input("Press Any Key!")
    
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
: Training
"""""""""""""""""""""""""""""""""""""""""""""""""""""""""
history = model.fit(dataset, batch_size=100, epochs=3, callbacks=[custom_callback] )
model.save_weights(checkpoint_path)

Output:

Model: "MyModelClassification"
_________________________________________________________________
 Layer (type)                Output Shape              Param #
=================================================================
 embedding (Embedding)       (None, 6, 1, 128)         128000

 reshape (Reshape)           (None, 6, 128)            0

 spatial_dropout1d (SpatialD  (None, 6, 128)           0
 ropout1D)

 conv1d (Conv1D)             (None, 1, 32)             24608

 max_pooling1d (MaxPooling1D  (None, 1, 32)            0
 )

 my_lstm_layer (MyLSTMLayer)  (None, 1, 64)            2048

 reshape_1 (Reshape)         (None, 1, 1, 64)          0

 up_sampling2d (UpSampling2D  (None, 4, 4, 64)         0
 )

 conv1d_1 (Conv1D)           (None, 4, 2, 16)          3088

 reshape_2 (Reshape)         (None, 8, 16)             0

 max_pooling1d_1 (MaxPooling  (None, 4, 16)            0
 1D)

 global_max_pooling1d (Globa  (None, 16)               0
 lMaxPooling1D)

 reshape_3 (Reshape)         (None, 1, 16)             0

 my_lstm_layer_1 (MyLSTMLaye  (None, 1, 16)            256
 r)

 dropout (Dropout)           (None, 1, 16)             0

 dense (Dense)               (None, 1, 128)            2176

 flatten (Flatten)           (None, 128)               0

 dense_1 (Dense)             (None, 4)                 516

=================================================================
Total params: 160,692
Trainable params: 160,692
Non-trainable params: 0
_________________________________________________________________
Epoch 1/3
2022-10-14 16:33:44.261736: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8100
200/200 [==============================] - 3s 5ms/step - loss: 0.3487 - accuracy: 0.9000
Epoch 2/3
200/200 [==============================] - 1s 5ms/step - loss: 0.2064 - accuracy: 0.9850
Answered By: Jirayu Kaewprateep