image-segmentation

Inference on image dataset without annotations in detectron2

Inference on image dataset without annotations in detectron2 Question: Motivation: I have a detectron2 Mask R-CNN baseline model that is good enough to predict some object boundaries accurately. I’d like to convert these predicted boundaries to COCO polygons to annotate the next dataset (supervised labeling). To do this, I need to run inference on an …

Total answers: 1
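A minimal sketch of what such an inference loop could look like, assuming a trained Mask R-CNN config and weights; the paths, score threshold, and polygon conversion below are illustrative, not the asker’s code:

    import glob
    import cv2
    import numpy as np
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = "output/model_final.pth"   # placeholder path to the baseline weights
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
    predictor = DefaultPredictor(cfg)

    for path in glob.glob("unlabeled/*.jpg"):      # images with no annotations
        im = cv2.imread(path)
        instances = predictor(im)["instances"].to("cpu")
        for mask in instances.pred_masks.numpy():  # one boolean mask per detected object
            contours, _ = cv2.findContours(mask.astype(np.uint8),
                                           cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
            # COCO polygons are flat [x0, y0, x1, y1, ...] lists, one per contour
            polygons = [c.flatten().tolist() for c in contours if len(c) >= 3]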

Count number of classes in a semantic segmented image

Count number of classes in a semantic segmented image Question: I have an image that is the output of a semantic segmentation algorithm, for example this one. I looked online and tried many pieces of code, but none worked for me so far. It is clear to the human eye that there are 5 different …

Total answers: 3
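If the output is a colour-coded segmentation map, one common approach is simply to count the distinct colours (or distinct label values for a single-channel map). A small sketch, with a placeholder file name:

    import cv2
    import numpy as np

    seg = cv2.imread("segmented.png")                         # placeholder path
    # Collapse the H x W x 3 image into a list of pixels and count unique colours;
    # for a single-channel label map, np.unique(seg) alone is enough.
    colours = np.unique(seg.reshape(-1, seg.shape[-1]), axis=0)
    print(len(colours), "distinct classes (possibly including background)")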

Adding Dropout Layers to U_Net Segmentation_Models

Adding Dropout Layers to U_Net Segmentation_Models Question: I am using the U_Net segmentation model for medical image segmentation with Keras and Tensorflow 2. I’d like to add dropout to the model, but I don’t know where to add it. Asked By: Emy Ibrahim || Source Answers: Yes, there are no dropout layers in the implementation of …

Total answers: 1
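As a general illustration (not the segmentation_models API itself): in a hand-written Keras U-Net, dropout is usually inserted around the bottleneck and the deeper decoder blocks, for example:

    from tensorflow.keras import layers

    def bottleneck(x, filters, rate=0.5):
        # Two conv layers followed by dropout; SpatialDropout2D is a common
        # alternative that drops whole feature maps instead of single units.
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        return layers.Dropout(rate)(x)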

Why does unet have classes?

Why does unet have classes? Question: import torch import torch.nn as nn import torch.nn.functional as F class double_conv(nn.Module): '''(conv => BN => ReLU) * 2''' def __init__(self, in_ch, out_ch): super(double_conv, self).__init__() self.conv = nn.Sequential( nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True), nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True) ) def forward(self, x): x = self.conv(x) return x …

Total answers: 1
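For context, the classes (often n_classes) argument in U-Net implementations like this typically just sets the width of the final 1×1 convolution, so the network emits one score map per class. A hedged sketch of that last layer:

    import torch.nn as nn

    class outconv(nn.Module):
        """Final 1x1 convolution: one output channel per class."""
        def __init__(self, in_ch, n_classes):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, n_classes, kernel_size=1)

        def forward(self, x):
            return self.conv(x)   # (N, n_classes, H, W) per-pixel class scores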

How to get iou of single class in keras semantic segmentation?

How to get iou of single class in keras semantic segmentation? Question: I am using the Image segmentation guide by fchollet to perform semantic segmentation. I have attempted to modify the guide to suit my dataset by labelling the 8-bit image mask values as 1 and 2, like in the Oxford Pets dataset (which will be …

Total answers: 2
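One way to measure IoU for a single class with that guide’s setup (per-pixel softmax outputs, integer masks) is a small custom metric; the class id, mask shapes, and epsilon below are assumptions:

    import tensorflow as tf

    def single_class_iou(class_id=1, eps=1e-7):
        def iou(y_true, y_pred):
            pred = tf.argmax(y_pred, axis=-1)                     # (N, H, W) class ids
            true = tf.cast(tf.squeeze(y_true, axis=-1), pred.dtype)
            p = tf.cast(tf.equal(pred, class_id), tf.float32)
            t = tf.cast(tf.equal(true, class_id), tf.float32)
            inter = tf.reduce_sum(p * t)
            union = tf.reduce_sum(p) + tf.reduce_sum(t) - inter
            return (inter + eps) / (union + eps)
        return iou

    # model.compile(..., metrics=[single_class_iou(class_id=1)])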

Using U-net in Python with 3-channel input images for image segmentation

Using U-net in Python with 3-channel input images for image segmentation Question: I am using unet for image segmentation, using the code outlined here. My input images are 256×256×3, while the corresponding segmentation masks are 256×256. I have changed the size for the input to Unet: def unet(pretrained_weights = None,input_size = (256,256,3)): and get a …

Total answers: 2
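For reference, a heavily reduced sketch of the idea (not the code the question links to): with a (256, 256, 3) input only the Input layer changes, but the masks typically still need an explicit single channel to match a 1-channel sigmoid output.

    import numpy as np
    from tensorflow.keras import layers, models

    inputs = layers.Input(shape=(256, 256, 3))                    # RGB input
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(inputs)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(x)        # (256, 256, 1) mask
    model = models.Model(inputs, outputs)

    masks = np.zeros((8, 256, 256), dtype="float32")              # placeholder masks
    masks = np.expand_dims(masks, axis=-1)                        # -> (8, 256, 256, 1)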

How to get correct output predictions from unet_learner (fastai)?

How to get correct output predictions from unet_learner (fastai)? Question: Please, I’m working on an image segmentation project and I used the fastai library (specifically the unet_learner). I’ve trained my model and everything is fine; here is my code (in the training phase): #codes = np.loadtxt('codes.txt', dtype=str) codes = np.array(['bg', 'edge'], dtype='<U4')# bg= …

Total answers: 2
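As a hedged fastai sketch (assuming learn is the trained unet_learner and learn.dls its DataLoaders): get_preds returns per-pixel class probabilities, so the usable mask is the argmax over the class dimension.

    # preds has shape (N, n_classes, H, W); targs holds the ground-truth masks.
    preds, targs = learn.get_preds(dl=learn.dls.valid)
    pred_masks = preds.argmax(dim=1)      # (N, H, W) with values 0 .. n_classes-1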

Removing small contours and noise from a thresholded image in Python

Removing small contours and noise from a thresholded image in Python Question: Are there any methods or functions to remove small contours given an already thresholded image through OpenCV in Python? My aim is to keep only the rectangles and then to separate the overlapping ones: Asked By: korzuswolf || Source Answers: If the blobs you are …

Total answers: 2
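A common recipe (with a placeholder file name and area threshold): find the external contours, then paint out any whose area falls below a minimum.

    import cv2

    thresh = cv2.imread("thresholded.png", cv2.IMREAD_GRAYSCALE)
    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    clean = thresh.copy()
    for c in contours:
        if cv2.contourArea(c) < 500:                              # minimum area to keep (tune)
            cv2.drawContours(clean, [c], -1, 0, thickness=cv2.FILLED)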

Tensorflow 2 throwing ValueError: as_list() is not defined on an unknown TensorShape

Tensorflow 2 throwing ValueError: as_list() is not defined on an unknown TensorShape Question: I’m trying to train a Unet model in Tensorflow 2.0 which takes as input an image and a segmentation mask, but I’m getting a ValueError: as_list() is not defined on an unknown TensorShape. The stack trace shows the problem occurs during …

Total answers: 3
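A frequent cause of this error (offered as an assumption about the pipeline, not the asker’s exact code) is a tf.data map step that goes through tf.numpy_function or tf.py_function, which loses the static shapes; restoring them with set_shape usually resolves it.

    import tensorflow as tf

    def restore_shapes(image, mask):
        image.set_shape([256, 256, 3])    # placeholder sizes
        mask.set_shape([256, 256, 1])
        return image, mask

    # dataset = dataset.map(load_pair_with_numpy_function).map(restore_shapes)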

Keras U-Net weighted loss implementation

Keras U-Net weighted loss implementation Question: I’m trying to separate close objects as was shown in the U-Net paper (here). For this, one generates weight maps which can be used for pixel-wise losses. The following code describes the network I use from this blog post. x_train_val = # list of images (imgs, 256, 256, 3) …

Total answers: 2
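A hedged sketch of one way to apply such weight maps: pack the weight map into a second channel of y_true (an assumed convention, not necessarily the blog post’s) and scale the per-pixel binary cross-entropy by it, as in the U-Net paper’s border weighting.

    from tensorflow.keras import backend as K

    def weighted_bce(y_true, y_pred):
        mask = y_true[..., 0:1]           # ground-truth mask
        weights = y_true[..., 1:2]        # per-pixel weight map (borders weighted up)
        bce = K.binary_crossentropy(mask, y_pred)
        return K.mean(bce * weights)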