U-Net multiclass segmentation image input dataset error
Question:
I am trying to do multiclass segmentation with U-Net. In previous trials I tried binary segmentation and it works, but when I try multiclass I get this error:

ValueError: generator yielded an element of shape (128, 192, 1) where an element of shape (128, 192, 5) was expected

The 5 denotes the number of classes. This is how I defined my output layer:

output: Tensor("output/sigmoid:0", shape=(?, 128, 192, 5), dtype=float32)

I kept a crop size of input_shape: (128, 192, 1), because the images are grayscale, and label_shape: (128, 192, 5).
Data is loaded into a TensorFlow Dataset and consumed through a tf.data iterator. A generator yields the data for the Dataset:
    def get_datapoint_generator(self):
        def generator():
            for i in itertools.count(1):
                datapoint_dict = self._get_next_datapoint()
                yield datapoint_dict['image'], datapoint_dict['mask']
        return generator
The _get_next_datapoint function gets the next datapoint from RAM and handles cropping and augmentation.

Now, where could it have gone wrong, so that the yielded mask doesn't match the output shape?
Answers:
Can you try this implementation? I am using this one, but it is in Keras:
    import tensorflow as tf
    from keras import backend as K

    def sparse_crossentropy(y_true, y_pred):
        # y_true carries integer class ids in its last channel;
        # y_pred has one score per class in its last dimension.
        nb_classes = K.int_shape(y_pred)[-1]
        # The one-hot depth must equal y_pred's class dimension,
        # and [..., 0] drops the trailing channel regardless of batching.
        y_true = K.one_hot(tf.cast(y_true[..., 0], dtype=tf.int32), nb_classes)
        return K.categorical_crossentropy(y_true, y_pred)
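Alternatively, the shape mismatch itself can be fixed on the data side: one-hot encode the integer mask inside the generator so each label comes out as (128, 192, 5) instead of (128, 192, 1). A minimal NumPy sketch of the idea (the helper name and the 5-class constant are my assumptions from the question; tf.one_hot does the same thing inside the graph):

```python
import numpy as np

NUM_CLASSES = 5  # assumed from the (128, 192, 5) output shape in the question

def one_hot_mask(mask, num_classes=NUM_CLASSES):
    """Convert an integer label mask of shape (H, W, 1) to one-hot (H, W, num_classes)."""
    mask = mask.squeeze(-1).astype(np.int64)            # (H, W) integer class ids
    return np.eye(num_classes, dtype=np.float32)[mask]  # (H, W, num_classes)

# Tiny 2x3 example mask with class ids 0..4
mask = np.array([[[0], [1], [4]],
                 [[2], [3], [0]]], dtype=np.uint8)
encoded = one_hot_mask(mask)
print(encoded.shape)  # (2, 3, 5)
```

With this in place, the generator would yield one_hot_mask(datapoint_dict['mask']) and the Dataset's declared label shape (128, 192, 5) would match what is actually produced.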