Invalid argument: Dimension -972891 must be >= 0

Question:

I have created a tf.data pipeline for speech recognition using the following code snippets:

import os

import numpy as np
import tensorflow as tf


def get_waveform_and_label(file_path):
    # The label is the name of the directory that contains the file
    label = tf.strings.split(file_path, os.path.sep)[-2]

    audio_binary = tf.io.read_file(file_path)
    audio, _ = tf.audio.decode_wav(audio_binary)
    waveform = tf.squeeze(audio, axis=-1)
    
    return waveform, label

def get_spectrogram(waveform):
    # Pad files with fewer than 16000 samples:
    # generate as many zeros as the waveform is short of 16000
    zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)

    # Concatenate audio with padding so that all audio clips will be of the same length
    waveform = tf.cast(waveform, tf.float32)
    waveform = tf.concat([waveform, zero_padding], 0)

    spectrogram = tf.signal.stft(waveform, frame_length=255, frame_step=128)
    spectrogram = tf.abs(spectrogram)

    return spectrogram

def get_spectrogram_and_label_id(audio, label):
    spectrogram = get_spectrogram(audio)
    spectrogram = tf.expand_dims(spectrogram, -1)
    
    label_id = tf.argmax(label == np.array(labels))
    label_onehot = tf.one_hot(label_id, len(labels))
    
    return spectrogram, label_onehot

files_ds = tf.data.Dataset.from_tensor_slices(files)
waveform_ds = files_ds.map(get_waveform_and_label, num_parallel_calls=tf.data.AUTOTUNE)
spectrogram_ds = waveform_ds.map(get_spectrogram_and_label_id, num_parallel_calls=tf.data.AUTOTUNE)

These snippets are borrowed from https://www.tensorflow.org/tutorials/audio/simple_audio#build_and_train_the_model.

And my model is defined as follows:

import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(input_shape))
x = tf.keras.layers.BatchNormalization()(inputs)

x = tf.keras.layers.Conv2D(8,13, padding='same', activation='relu', strides=1)(x)
x = tf.keras.layers.MaxPooling2D(3)(x)
x = tf.keras.layers.Dropout(0.4)(x)
x = tf.keras.layers.BatchNormalization()(x)

x = tf.keras.layers.Conv2D(32, 11, padding='same', activation='relu', strides=1)(x)
x = tf.keras.layers.MaxPooling2D(3)(x)
x = tf.keras.layers.Dropout(0.4)(x)
x = tf.keras.layers.BatchNormalization()(x)

x = tf.keras.layers.Conv2D(256, 9, padding='same', activation='relu', strides=1)(x)
x = tf.keras.layers.MaxPooling2D(3)(x)
x = tf.keras.layers.Dropout(0.4)(x)
x = tf.keras.layers.BatchNormalization()(x)

x = tf.keras.layers.Flatten()(x)
x = tf.keras.layers.Dense(512, activation='relu')(x)
outputs = tf.keras.layers.Dense(len(labels), activation="softmax")(x)

model = tf.keras.models.Model(inputs, outputs)

model.compile(loss="categorical_crossentropy",
              optimizer=tf.keras.optimizers.Adam(), 
              metrics=['accuracy'])
model.summary()

When I start the training process, this error appears after a few iterations:

> InvalidArgumentError: 2 root error(s) found.
>
> (0) Invalid argument: Dimension -972891 must be >= 0
>     [[{{node zeros}}]]
>     [[IteratorGetNext]]
>     [[categorical_crossentropy/softmax_cross_entropy_with_logits/Shape_2/_6]]
>
> (1) Invalid argument: Dimension -972891 must be >= 0
>     [[{{node zeros}}]]
>     [[IteratorGetNext]]
>
> 0 successful operations. 0 derived errors ignored. [Op:__inference_train_function_6412]
>
> Function call stack: train_function -> train_function
Asked By: Soroush


Answers:

I found that the issue happens in the padding step, i.e. here:

zero_padding = tf.zeros([16000] - tf.shape(waveform), dtype=tf.float32)
waveform = tf.cast(waveform, tf.float32)
waveform = tf.concat([waveform, zero_padding], 0)

The problem is that some clips are longer than 16000 samples, so [16000] - tf.shape(waveform) becomes negative and tf.zeros cannot build a tensor with a negative dimension (hence "Dimension -972891 must be >= 0"). I replaced the padding step with tf.signal.frame and the issue was resolved.
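A minimal sketch of how the padding step can be replaced with tf.signal.frame (the exact parameters here are an assumption, since the original replacement code is not shown):

def get_spectrogram(waveform):
    waveform = tf.cast(waveform, tf.float32)

    # Split the waveform into 16000-sample frames, zero-padding the last
    # (possibly only) frame, then keep the first frame. Short clips are
    # padded and long clips are truncated, so no negative size can occur.
    waveform = tf.signal.frame(waveform,
                               frame_length=16000,
                               frame_step=16000,
                               pad_end=True)[0]

    spectrogram = tf.signal.stft(waveform, frame_length=255, frame_step=128)
    spectrogram = tf.abs(spectrogram)

    return spectrogram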

Answered By: Soroush

This error occurs because the output of tf.shape(waveform) is greater than 16000 for some files, so [16000] - tf.shape(waveform) is negative. You need to increase 16000 to a value larger than the longest waveform.

I suggest printing tf.shape(waveform) above that line (use tf.print inside the mapped function, since a plain print only runs once during graph tracing), so you can see what the value needs to be increased to.
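For example, one way to find the longest clip (a debugging sketch, assuming waveform_ds is the dataset built in the question and is iterated eagerly):

# Iterate the dataset eagerly to find the longest waveform, in samples.
max_len = 0
for waveform, _ in waveform_ds:
    max_len = max(max_len, int(tf.shape(waveform)[0]))
print("longest clip:", max_len, "samples")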

Answered By: CharlieMaunder

I also ran into this same issue. Check whether the sampling rate of your WAV files is 16000 Hz; if it is not, you can convert them to 16000 Hz using ffmpeg or any other tool. If the issue still remains, check the sample count of your WAV files (the sample count should be 16000).

If it is not, you can change either the duration or the sample count, since the three quantities are related by sampling rate = sample count / duration. So even if you decrease the sampling rate, the sample count decreases, but it will still be greater than 16000 if the WAV file is longer than 1 second.
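A small sketch of such a check using TensorFlow itself (file_path here stands for any one of your WAV paths; ffprobe or similar tools work just as well):

import tensorflow as tf

# Decode one file and report its sample rate and sample count.
audio_binary = tf.io.read_file(file_path)
audio, sample_rate = tf.audio.decode_wav(audio_binary)
num_samples = tf.shape(audio)[0]

print("sample rate :", int(sample_rate))   # expected: 16000
print("sample count:", int(num_samples))   # expected: 16000 for a 1-second clip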

Answered By: nitesh mishra

Yeah, can you explain how you did that?

Answered By: Anuvab Sen