TorchScript requires source access in order to carry out compilation for collections.deque

Question:

I’m trying to convert the PyTorch FOMM (First Order Motion Model) to TorchScript. As soon as I started annotating some classes with @torch.jit.script, I got an error:

OSError: Can't get source for <class 'collections.deque'>. TorchScript requires source access in order to carry out compilation, make sure original .py files are available.

As I understand it, that class is implemented in C inside CPython, so its source cannot be read by the TorchScript compiler. I failed to find any pure-Python implementation. How can I overcome this issue?
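For reference, here is a minimal sketch that I believe reproduces the same error: queue.Queue stores its items in a collections.deque internally, so scripting any class that constructs a Queue pulls deque into compilation.

import queue
import torch

@torch.jit.script
class QueueHolder(object):
    def __init__(self):
        # queue.Queue keeps its buffer in a collections.deque (a C type),
        # so recursive compilation fails with "Can't get source".
        self.q = queue.Queue()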

Here is the class I’m trying to annotate:

import queue
import collections
import threading
import torch
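
# Note: FutureResult, _MasterRegistry, and SlavePipe are referenced below but
# defined elsewhere in the same module; their definitions are omitted here.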

@torch.jit.script
class SyncMaster(object):
    """An abstract `SyncMaster` object.

    - During the replication, as the data parallel will trigger an callback of each module, all slave devices should
    call `register(id)` and obtain an `SlavePipe` to communicate with the master.
    - During the forward pass, master device invokes `run_master`, all messages from slave devices will be collected,
    and passed to a registered callback.
    - After receiving the messages, the master device should gather the information and determine to message passed
    back to each slave devices.
    """

    def __init__(self, master_callback):
        """

        Args:
            master_callback: a callback to be invoked after having collected messages from slave devices.
        """
        self._master_callback = master_callback
        self._queue = queue.Queue()
        self._registry = collections.OrderedDict()
        self._activated = False

    def __getstate__(self):
        return {'master_callback': self._master_callback}

    def __setstate__(self, state):
        self.__init__(state['master_callback'])

    def register_slave(self, identifier):
        """
        Register a slave device.

        Args:
            identifier: an identifier, usually the device id.

        Returns: a `SlavePipe` object which can be used to communicate with the master device.

        """
        if self._activated:
            assert self._queue.empty(), 'Queue is not clean before next initialization.'
            self._activated = False
            self._registry.clear()
        future = FutureResult()
        self._registry[identifier] = _MasterRegistry(future)
        return SlavePipe(identifier, self._queue, future)

    def run_master(self, master_msg):
        """
        Main entry for the master device in each forward pass.
        Messages are first collected from each device (including the master device), and then
        a callback is invoked to compute the messages to be sent back to each device
        (including the master device).

        Args:
            master_msg: the message that the master wants to send to itself. This will be placed as the first
            message when calling `master_callback`. For detailed usage, see `_SynchronizedBatchNorm` for an example.

        Returns: the message to be sent back to the master device.

        """
        self._activated = True

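        # Collect one message from each slave; the master's own message comes first.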
        intermediates = [(0, master_msg)]
        for i in range(self.nr_slaves):
            intermediates.append(self._queue.get())

        results = self._master_callback(intermediates)
        assert results[0][0] == 0, 'The first result should belong to the master.'

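        # Send each slave its result; skip index 0, which belongs to the master.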
        for i, res in results:
            if i == 0:
                continue
            self._registry[i].result.put(res)

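        # Each slave puts True back on the queue after receiving its result;
        # this blocks until every slave has acknowledged.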
        for i in range(self.nr_slaves):
            assert self._queue.get() is True

        return results[0][1]

    @property
    def nr_slaves(self):
        return len(self._registry)
Asked By: serg_zhd


Answers:

I switched the TorchScript generation method from torch.jit.script to torch.jit.trace and it worked; there was no need to annotate anything. Alternatively, torch.onnx.export sometimes works as well.
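
For example, a minimal sketch of tracing; the model and input shape below are placeholders, so substitute the real FOMM module and an example input of the correct shape:

import torch
import torch.nn as nn

# Placeholder model; replace with the actual module you want to export.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return torch.relu(self.conv(x))

model = ToyModel().eval()
example_input = torch.randn(1, 3, 64, 64)

# trace() records the operations executed for the example input instead of
# compiling Python source, so types like queue.Queue are never parsed.
traced = torch.jit.trace(model, example_input)
traced.save("model_traced.pt")

# ONNX export is also trace-based and sidesteps the same problem:
torch.onnx.export(model, example_input, "model.onnx")

Keep in mind that tracing only records the code path taken for the example input, so data-dependent control flow gets baked in.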

Answered By: serg_zhd

I had this issue when trying to use PyInstaller on a Python script that used torch. I followed step 3 in this GitHub thread to change the decorator to @torch.jit._script_if_tracing in modeling_deberta.py.
(Just note that in the GitHub answer there’s a typo in the git clone command, where it says "transormers" instead of "transformers", and the file path is slightly different: src/transformers/models/deberta/modeling_deberta.py. I also made the change in modeling_deberta_v2.py just to be safe.)
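
The change looks roughly like this; this is an illustrative sketch with a made-up function, and note that _script_if_tracing is a private PyTorch API that may change between versions:

import torch

# Before: compiled eagerly at import time. In a PyInstaller bundle the
# original .py source is unavailable, so this raises "Can't get source".
@torch.jit.script
def double(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0

# After: stays plain Python unless the call happens during tracing.
@torch.jit._script_if_tracing
def double_if_tracing(x: torch.Tensor) -> torch.Tensor:
    return x * 2.0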

Answered By: Lily