TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'. in computer vision

Question:

I’m working on training my model.

So, I do:

# create the base pre-trained model
base_model = DenseNet121(weights='/Users/awabe/Desktop/Project/PapilaDB/ClinicalData/DenseNet-BC-121-32-no-top.h5', include_top=False)

x = base_model.output

# add a global spatial average pooling layer
x = GlobalAveragePooling2D()(x)

# and a logistic layer
predictions = Dense(len(labels), activation="sigmoid")(x)

model = Model(inputs=base_model.input, outputs=predictions)
model.compile(optimizer='adam', loss=get_weighted_loss(pos_weights, neg_weights))

Then I go to the plotting section and use:

history = model.fit_generator(train_generator, 
                              validation_data=test_generator,
                              steps_per_epoch=100, 
                              validation_steps=25, 
                              epochs = 3)

import matplotlib.pyplot as plt

plt.plot(history.history['loss'])
plt.ylabel("loss")
plt.xlabel("epoch")
plt.title("Training Loss Curve")
plt.show()

Then it gives me this error message:

TypeError: in user code:

File "/opt/anaconda3/envs/tensorflow/lib/python3.10/site-packages/keras/engine/training.py", line 1160, in train_function  *
    return step_function(self, iterator)
File "/var/folders/p4/gy9qtf594h3d5q85bzzgflz00000gn/T/ipykernel_1809/4264699890.py", line 27, in weighted_loss  *
    loss += -(K.mean((pos_weights[i] * y_true[:,i] * K.log(y_pred[:,i] + epsilon) + neg_weights[i]*(1-y_true[:,i]) * K.log(1-y_pred[:,i]+epsilon))))


TypeError: Input 'y' of 'Mul' Op has type float32 that does not match type int64 of argument 'x'.
Asked By: Awab Elkhair


Answers:

As delirium mentioned in the comments, the fix was to cast the int64 operand to float32 before the multiplication inside the loss, using tf.cast(variable, tf.float32). The Mul op requires both operands to have the same dtype, and the labels coming out of the generator were int64 while the predictions and weights were float32.
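For reference, here is a minimal sketch of where the casts can go inside the custom loss. The weighted_loss body is reconstructed from the traceback in the question, and the per-class pos_weights / neg_weights arguments are assumed to be the same arrays passed to get_weighted_loss when compiling the model:

import tensorflow as tf
import tensorflow.keras.backend as K

def get_weighted_loss(pos_weights, neg_weights, epsilon=1e-7):
    def weighted_loss(y_true, y_pred):
        # Cast labels and predictions to float32 so the Mul op never sees
        # mixed int64/float32 operands.
        y_true = tf.cast(y_true, tf.float32)
        y_pred = tf.cast(y_pred, tf.float32)
        loss = 0.0
        for i in range(len(pos_weights)):
            # Cast the per-class weights as well, in case they are integer arrays.
            pos_w = tf.cast(pos_weights[i], tf.float32)
            neg_w = tf.cast(neg_weights[i], tf.float32)
            loss += -K.mean(
                pos_w * y_true[:, i] * K.log(y_pred[:, i] + epsilon)
                + neg_w * (1 - y_true[:, i]) * K.log(1 - y_pred[:, i] + epsilon)
            )
        return loss
    return weighted_loss

With the casts in place, the model compiled with loss=get_weighted_loss(pos_weights, neg_weights) should accept generators that yield integer labels without raising the dtype error.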

Answered By: Awab Elkhair