Multiprocessing freeze_support() error in Python

Question:

I am new to deep learning and have been using Google Colab to run deep learning models until now. While I was trying the same with PyCharm, this error showed up. Project_name=model, file_name=model.py

from __future__ import print_function, division
import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler
import torch.backends.cudnn as cudnn
import numpy as np
import torchvision
from torchvision import datasets, models, transforms
import matplotlib.pyplot as plt
import time
import os
import copy

cudnn.benchmark = True
plt.ion()   # interactive mode


 

data_transforms = {
    'train': transforms.Compose([
        transforms.RandomResizedCrop(224),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
    'val': transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ]),
}

data_dir = '.hymenoptera_data'

image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                          data_transforms[x])
                  for x in ['train', 'val']}
dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                             shuffle=True, num_workers=4)
              for x in ['train', 'val']}
dataset_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
class_names = image_datasets['train'].classes

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


## Visualize a few training images

def imshow(inp, title=None):
    """Imshow for Tensor."""
    inp = inp.numpy().transpose((1, 2, 0))
    mean = np.array([0.485, 0.456, 0.406])
    std = np.array([0.229, 0.224, 0.225])
    inp = std * inp + mean
    inp = np.clip(inp, 0, 1)
    plt.imshow(inp)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause a bit so that plots are updated


# Get a batch of training data
inputs, classes = next(iter(dataloaders['train']))
# Make a grid from the batch
out = torchvision.utils.make_grid(inputs)

imshow(out, title=[class_names[x] for x in classes])

I do not have any main method; I am running model.py directly. Is that even a legal move?

ERROR:

(venv) PS C:\Users\prasa\PycharmProjects\model> python model.py
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\python 39\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\python 39\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\python 39\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\python 39\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\python 39\lib\runpy.py", line 268, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\python 39\lib\runpy.py", line 97, in _run_module_code
                ...

        The "freeze_support()" line can be omitted if the program
        is not going to be frozen to produce an executable.



Asked By: Prasanjeet Panda


Answers:

The multiprocessing standard library module does not work correctly if your application has no main() function but instead executes its code at import time. On Windows, worker processes are started with the spawn method, which re-imports your main module in each child; because the DataLoader is created with num_workers=4, the top-level code runs again in every worker and the spawn aborts with the freeze_support() RuntimeError you see.

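To see why, the same failure can be reproduced without PyTorch at all: any script that starts worker processes at module level breaks the same way on Windows (a standalone illustration, not taken from the question):

import multiprocessing


def work(x):
    return x * x


# Creating the pool at module level is the problem: with the spawn start
# method (the default on Windows), every worker re-imports this file,
# reaches this line again, and tries to start its own workers before its
# bootstrapping has finished, which raises the freeze_support() RuntimeError.
pool = multiprocessing.Pool(2)
print(pool.map(work, [1, 2, 3]))

In model.py the same thing happens through DataLoader(..., num_workers=4), which starts its worker processes as soon as the loader is iterated at module level.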
The solution is to move the code into a main() function and call it only under an if __name__ == "__main__": guard, like:

import ...
...


def main():
    ...  # your code here


if __name__ == "__main__":
    main()
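Applied to the script in the question, that means the DataLoader creation and the batch visualization run only from main(). A minimal sketch of that restructuring (the visualization is reduced to a print for brevity, and 'hymenoptera_data' is just a placeholder for wherever the dataset actually lives):

import os

import torch
import torchvision
from torchvision import datasets, transforms


def build_dataloaders(data_dir):
    # Same transforms as in the question.
    normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            normalize,
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            normalize,
        ]),
    }
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x),
                                              data_transforms[x])
                      for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                                  shuffle=True, num_workers=4)
                   for x in ['train', 'val']}
    return image_datasets, dataloaders


def main():
    image_datasets, dataloaders = build_dataloaders('hymenoptera_data')
    class_names = image_datasets['train'].classes

    # Iterating the DataLoader is what starts the worker processes,
    # so on Windows it has to happen under the __main__ guard.
    inputs, classes = next(iter(dataloaders['train']))
    grid = torchvision.utils.make_grid(inputs)
    print(grid.shape, [class_names[c] for c in classes])


if __name__ == "__main__":
    main()

If you just want the quickest fix while experimenting, passing num_workers=0 to the DataLoader also makes the error go away, because no worker processes are started at all.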
Answered By: Mikko Ohtamaa