loss-function

keras variational autoencoder loss function

Question: I’ve read this blog by Keras on VAE implementation, where the VAE loss is defined this way:

def vae_loss(x, x_decoded_mean):
    xent_loss = objectives.binary_crossentropy(x, x_decoded_mean)
    kl_loss = -0.5 * K.mean(1 + z_log_sigma - K.square(z_mean) - K.exp(z_log_sigma), axis=-1)
    return xent_loss + kl_loss

I looked at the Keras documentation and the VAE …

Total answers: 3
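The two terms in the excerpt above can be checked numerically. Below is a plain NumPy sketch of the same math (reconstruction cross-entropy plus the KL divergence between a diagonal Gaussian posterior and a standard normal prior); the function and argument names mirror the excerpt but are otherwise illustrative, not the Keras API itself.

```python
import numpy as np

def binary_crossentropy(x, x_decoded_mean, eps=1e-7):
    # Mean binary cross-entropy between targets x and reconstructions,
    # clipped to avoid log(0).
    p = np.clip(x_decoded_mean, eps, 1 - eps)
    return -np.mean(x * np.log(p) + (1 - x) * np.log(1 - p), axis=-1)

def kl_divergence(z_mean, z_log_sigma):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior;
    # term-by-term the same expression as in the excerpt above.
    return -0.5 * np.mean(1 + z_log_sigma - np.square(z_mean) - np.exp(z_log_sigma), axis=-1)

def vae_loss(x, x_decoded_mean, z_mean, z_log_sigma):
    return binary_crossentropy(x, x_decoded_mean) + kl_divergence(z_mean, z_log_sigma)
```

With z_mean = 0 and z_log_sigma = 0 the posterior equals the prior, so the KL term is exactly zero, which is a quick sanity check on the signs.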

NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array

Question: I am trying to pass 2 loss functions to a model, as Keras allows that:

loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output …

Total answers: 12

Keras/Tensorflow: Combined Loss function for single output

Question: I have only one output for my model, but I would like to combine two different loss functions:

def get_model():
    # create the model here
    model = Model(inputs=image, outputs=output)
    alpha = 0.2
    model.compile(loss=[mse, gse], loss_weights=[1 - alpha, alpha], …)

but it complains that I need to have two …

Total answers: 5
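Passing a list of losses tells Keras to expect one output per loss, which is why the code above complains. The usual fix is to blend the two losses inside a single callable. A NumPy sketch of that pattern follows; since gse is not defined in the excerpt, mae is used here purely as a placeholder second loss:

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # placeholder for the asker's second loss ("gse" is not shown in the excerpt)
    return np.mean(np.abs(y_true - y_pred))

def combined_loss(alpha=0.2, loss_a=mse, loss_b=mae):
    # A single callable mixing two losses for ONE output, which is
    # the shape model.compile(loss=...) expects for a single-output model.
    def loss(y_true, y_pred):
        return (1 - alpha) * loss_a(y_true, y_pred) + alpha * loss_b(y_true, y_pred)
    return loss
```

The returned closure has the standard (y_true, y_pred) signature, so it drops in wherever a single loss function is accepted.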

RMSE/ RMSLE loss function in Keras

Question: I am trying to participate in my first Kaggle competition, where RMSLE is given as the required loss function. Since I have found nothing on how to implement this loss function, I tried to settle for RMSE. I know this was part of Keras in the past; is there any …

Total answers: 6
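Both metrics are one-liners once written out. A NumPy sketch of RMSE and RMSLE (RMSLE is just RMSE applied to log1p-transformed values, assuming non-negative targets and predictions):

```python
import numpy as np

def rmse(y_true, y_pred):
    # root mean squared error
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def rmsle(y_true, y_pred):
    # root mean squared logarithmic error; log1p keeps zero values valid,
    # but inputs are assumed non-negative
    return np.sqrt(np.mean((np.log1p(y_pred) - np.log1p(y_true)) ** 2))
```

The same expressions translate directly to a Keras custom loss by swapping np for the backend ops (e.g. tf.sqrt, tf.reduce_mean).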

L1/L2 regularization in PyTorch

Question: How do I add L1/L2 regularization in PyTorch without manually computing it? Asked By: Wasi Ahmad || Source Answers: See the documentation. Add a weight_decay parameter to the optimizer for L2 regularization. Answered By: Kashyap Use weight_decay > 0 for L2 regularization:

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)

Answered By: devil …

Total answers: 7
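What weight_decay actually does can be spelled out without PyTorch. A NumPy sketch of one SGD step with L2 weight decay, plus the extra gradient term a manual L1 penalty would contribute (the function names here are illustrative, not PyTorch API):

```python
import numpy as np

def sgd_step(w, grad, lr=0.1, weight_decay=1e-2):
    # L2 regularization via weight decay: the penalty 0.5 * wd * ||w||^2
    # adds wd * w to the gradient, so the update becomes
    # w <- w - lr * (grad + wd * w)
    return w - lr * (grad + weight_decay * w)

def l1_grad(w, l1_lambda=1e-3):
    # A manual L1 penalty l1_lambda * ||w||_1 would instead add
    # l1_lambda * sign(w) to the gradient.
    return l1_lambda * np.sign(w)
```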

NaN loss when training regression network

Question: I have a data matrix in "one-hot encoding" (all ones and zeros) with 260,000 rows and 35 columns. I am using Keras to train a simple neural network to predict a continuous variable. The code to make the network is the following:

model = Sequential()
model.add(Dense(1024, input_shape=(n_train,)))
model.add(Activation('relu'))

…

Total answers: 27
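A common first step when a regression loss goes to NaN is verifying that the inputs and targets themselves are finite before blaming the network. A small, hypothetical helper for that check:

```python
import numpy as np

def check_finite(arr, name):
    # Raise early if the array contains NaN or Inf; bad values in the
    # data or targets are a frequent cause of NaN training loss.
    if not np.all(np.isfinite(arr)):
        raise ValueError(f"{name} contains NaN or Inf values")
    return arr
```

If the data is clean, the usual next suspects are an oversized learning rate and exploding gradients (mitigated by lowering the rate or clipping gradient norms).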