How to run inference from a model using AWS Lambda?
Question:
I trained an image segmentation model with Mask-RCNN, and my goal now is to run inference on AWS Lambda. To do this I am adapting the AWS sample "Deploy multiple machine learning models for inference on AWS Lambda and Amazon EFS". My project tree is:
.
├── Dockerfile
├── __init__.py
├── app.py
├── requirements.txt
└── maskrcnn
    ├── config.py
    ├── __init__.py
    ├── m_rcnn.py
    ├── visualize.py
    ├── mask_rcnn_coco.h5
    ├── mask_rcnn_object_0005.h5
    └── model.py
So instead of loading my models from the EFS mount point, I decided to bundle them in the maskrcnn directory and copy it into the image with a COPY statement in the Dockerfile: COPY maskrcnn/ ./maskrcnn.
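For context, a minimal sketch of what such a Dockerfile could look like (the base image tag and handler name are assumptions; only the COPY maskrcnn/ line comes from my actual setup):

```dockerfile
# Sketch only -- image tag and handler name are assumed, not my exact Dockerfile.
FROM public.ecr.aws/lambda/python:3.8

COPY requirements.txt ./
RUN pip install -r requirements.txt

# Bundle the code and model weights into the image instead of reading them from EFS.
# The Lambda base image's working directory is /var/task, which matches the
# /var/task/maskrcnn/... paths in the traceback below.
COPY maskrcnn/ ./maskrcnn
COPY app.py ./

CMD ["app.lambda_handler"]
```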
When I invoke my code locally, it works perfectly. However, after deploying the project it doesn't work anymore. When I try to run inference through the API Gateway endpoint, I get the following response:
[ERROR] OSError: [Errno 38] Function not implemented
Traceback (most recent call last):
  File "/var/task/app.py", line 32, in lambda_handler
    r = test_model.detect([image])[0]
  File "/var/task/maskrcnn/model.py", line 2545, in detect
    self.keras_model.predict([molded_images, image_metas, anchors], verbose=0)
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_v1.py", line 988, in predict
    return func.predict(
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_arrays_v1.py", line 703, in predict
    return predict_loop(
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_arrays_v1.py", line 386, in model_iteration
    aggregator.create(batch_outs)
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_utils_v1.py", line 446, in create
    SliceAggregator(self.num_samples, self.batch_size)))
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_utils_v1.py", line 355, in __init__
    self._pool = get_copy_pool()
  File "/var/lang/lib/python3.8/site-packages/tensorflow/python/keras/engine/training_utils_v1.py", line 323, in get_copy_pool
    _COPY_POOL = multiprocessing.pool.ThreadPool(_COPY_THREADS)
  File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 925, in __init__
    Pool.__init__(self, processes, initializer, initargs)
  File "/var/lang/lib/python3.8/multiprocessing/pool.py", line 196, in __init__
    self._change_notifier = self._ctx.SimpleQueue()
  File "/var/lang/lib/python3.8/multiprocessing/context.py", line 113, in SimpleQueue
    return SimpleQueue(ctx=self.get_context())
  File "/var/lang/lib/python3.8/multiprocessing/queues.py", line 336, in __init__
    self._rlock = ctx.Lock()
  File "/var/lang/lib/python3.8/multiprocessing/context.py", line 68, in Lock
    return Lock(ctx=self.get_context())
  File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 162, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
  File "/var/lang/lib/python3.8/multiprocessing/synchronize.py", line 57, in __init__
    sl = self._semlock = _multiprocessing.SemLock(
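The failing call at the bottom is `_multiprocessing.SemLock(...)`: the AWS Lambda execution environment does not provide POSIX shared-memory semaphores, so any `multiprocessing` primitive backed by a SemLock raises `OSError: [Errno 38] Function not implemented`. A minimal sketch to probe for this at startup (the helper name is mine, not part of any library):

```python
import multiprocessing


def semlock_available() -> bool:
    """Return True if POSIX semaphores work in this environment.

    Inside the AWS Lambda runtime, creating a multiprocessing.Lock()
    raises OSError: [Errno 38] Function not implemented, because the
    environment lacks shared-memory semaphore support.
    """
    try:
        multiprocessing.Lock()
        return True
    except OSError:
        return False
```

On an ordinary Linux machine this returns True; inside the Lambda runtime it returns False, matching the traceback above.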
Answers:
I was able to run the project by changing the Python version from 3.8 to 3.7 in the Dockerfile. The traceback shows why this helps: starting with Python 3.8, `multiprocessing.pool.Pool.__init__` creates a `SimpleQueue` change notifier (`self._change_notifier` at pool.py line 196), which requires a SemLock, and Lambda does not support POSIX semaphores. On Python 3.7 `ThreadPool` creation does not hit that code path, so Keras' copy pool can be built.
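Assuming the image is built from the AWS Lambda Python base image (the exact registry path is an assumption), the fix amounts to changing the tag:

```dockerfile
# Before: FROM public.ecr.aws/lambda/python:3.8
FROM public.ecr.aws/lambda/python:3.7
```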