How to avoid reloading ML model every time when I call python script?

Question:

I have two files: file1.py, which contains an ML model about 1 GB in size, and file2.py, which calls the get_vec() method from file1 and receives vectors in return. The ML model is loaded from disk every time file1's get_vec() method is called, and this is where it takes a lot of time (around 10 s).

I want to tell file1 somehow not to reload the model every time, but to reuse the model loaded by earlier calls.

Sample code is as follows:

# File1.py

import spacy
nlp = spacy.load('model')

def get_vec(post):
    doc = nlp(post)
    return doc.vector

# File2.py

from File1 import get_vec

df['vec'] = df['text'].apply(lambda x: get_vec(x))

So here, each call takes 10 to 12 seconds. This looks like a small piece of code, but it is part of a large project and I cannot put both parts in the same file.

Update 1:

I have done some research and learned that I can use Redis to cache the model the first time it runs, and thereafter read the model directly from the cache. For testing, I tried it with Redis as follows:

import spacy
import redis

nlp = spacy.load('en_core_web_lg')
r = redis.Redis(host = 'localhost', port = 6379, db = 0)
r.set('nlp', nlp)

It throws an error

DataError: Invalid input of type: 'English'. Convert to a bytes, string, int or float first.

It seems type(nlp) is English and it needs to be converted to a suitable format, so I tried pickle to convert it. But again, pickle takes a lot of time encoding and decoding. Is there any way to store this in Redis?
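For illustration, caching the pickled output vectors rather than the pipeline itself would sidestep the DataError, since pickle.dumps returns bytes that Redis accepts, and vectors are tiny compared to the 1 GB model. A minimal sketch, with a plain dict and a stub compute function standing in for the Redis connection and the nlp call (both hypothetical):

```python
import pickle

cache = {}  # stand-in for the Redis connection (r.set / r.get)
calls = []  # counts how often the "model" is actually invoked

def compute(text):
    calls.append(text)         # pretend this is the slow nlp(text) call
    return [float(len(text))]  # stand-in for doc.vector

def get_vec_cached(text):
    key = "vec:" + text
    if key not in cache:
        cache[key] = pickle.dumps(compute(text))  # bytes: Redis-safe
    return pickle.loads(cache[key])

first = get_vec_cached("hello")
second = get_vec_cached("hello")  # served from the cache, no recompute
print(first, second, len(calls))  # [5.0] [5.0] 1
```

The same pattern works with a real Redis connection by replacing the dict lookups with r.get(key) / r.set(key, value).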

Can anybody suggest how I can make it faster? Thanks.

Asked By: Samual


Answers:

Save the model once it is trained, and start using Python as an object-oriented programming language rather than as a scripting language.

Answered By: P.Kagwe

Use Flask.
See how this user tried to implement it here: Simple Flask app using spaCy NLP hangs intermittently

Send your data frame's data to Flask through an HTTP request, or save it as a file and send the file to the server.

Just load the model into a global variable and use that variable in the app code.
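A minimal sketch of that idea (the /vec route, the payload shape, and the stub model are assumptions for illustration, not the asker's actual code):

```python
from flask import Flask, request, jsonify

app = Flask(__name__)
model = None  # populated once, before the server starts

def load_model():
    global model
    # real code would do: model = spacy.load('model'); a stub stands in here
    model = lambda text: [float(len(text))]

@app.route("/vec", methods=["POST"])
def vec():
    # every request reuses the already-loaded global model
    return jsonify({"vector": model(request.get_json()["text"])})

load_model()

if __name__ == "__main__":
    app.run()
```

The model is loaded once when the server process starts, so each request only pays the per-text cost.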

Answered By: Sushant Gautam

Your problem is not clear to me. The line nlp = spacy.load('model') is executed only once in the given code, at import time. Since no call to get_vec reloads the model, if each call to get_vec still takes 10-12 seconds, then nothing can be done in your case.
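This can be verified: Python caches imported modules in sys.modules, so module-level code like spacy.load(...) runs once per process, no matter how often the module is imported. A sketch with a synthetic module (File1_demo and fake_load are made up for the demonstration):

```python
import sys
import types

load_count = 0

def fake_load():
    # stands in for spacy.load('model'), which runs at module import time
    global load_count
    load_count += 1
    return object()

# build a module whose top-level setup "loads the model" once
mod = types.ModuleType("File1_demo")
mod.nlp = fake_load()
sys.modules["File1_demo"] = mod

import File1_demo  # cache hit in sys.modules: not executed again
import File1_demo  # still no re-execution

print(load_count)  # 1: the "model" was loaded exactly once
```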

Answered By: DHANANJAY RAUT

If all your syntax is correct, then this should not load the model more than once (only in the constructor of the ml class).

# File1.py

import spacy

class ml:
    def __init__(self, model_path):
        self.nlp = spacy.load(model_path)  # 'model'

    def get_vec(self, post):
        return self.nlp(post).vector


# File2.py

from File1 import ml

my_ml = ml('model') # pass model path

df['vec'] = df['text'].apply(lambda x: my_ml.get_vec(x))
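If several modules each construct their own ml('model'), each instance still loads the model once; a memoized factory guarantees one shared instance per model path. A sketch (get_model is a hypothetical addition, and spacy.load is stubbed out so it runs standalone):

```python
from functools import lru_cache

class ml:
    loads = 0  # counts constructor calls, i.e. model loads

    def __init__(self, model_path):
        type(self).loads += 1
        self.model_path = model_path  # real code would call spacy.load(model_path)

@lru_cache(maxsize=None)
def get_model(model_path):
    # first call per path constructs (and loads); later calls reuse that instance
    return ml(model_path)

a = get_model('model')
b = get_model('model')
print(a is b, ml.loads)  # True 1: one instance, one load
```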

Answered By: Zabir Al Nazi

Here's how to do it.

Step 1) Create a function in Python and load your model in that function:

from tensorflow.keras.applications import ResNet50  # the Keras model used in this example

model = None

def load_model():
    global model
    model = ResNet50(weights="imagenet")

If you observe carefully, first I assigned the variable model to None. Then, inside the load_model function, I loaded the model.

I also made sure the variable model is global so that it can be accessed from outside this function. The intuition here is that we load the model object into a global variable so that we can access it anywhere within the code.

Now that we have our tools ready (i.e., we can access the model from anywhere within this code), let's freeze this model in your computer's RAM. This is done by:

import flask

app = flask.Flask(__name__)

if __name__ == "__main__":
    print("* Loading Keras model and Flask starting server... "
          "please wait until server has fully started")
    load_model()
    app.run()

Now, what's the use of freezing the model in RAM without using it? To use it, I handle a POST request in Flask:

@app.route("/predict", methods=["POST"])
def predict():
    if flask.request.method == "POST":
        data = flask.request.get_json()
        output = model.predict(data)  # what you want to do with the frozen model goes here
        return flask.jsonify(output.tolist())

So, using this trick, you can freeze the model in RAM, access it using a global variable, and then use it in your code.

Answered By: Ajinkya

I am trying to achieve a similar thing using Python and a Dockerized Lambda. I am currently using a single Python script with two functions: load() and implement(). load() loads the model and implement() does the further computation. I prepared a Docker image of this script, saved it in an ECR repository, and used that image while building the Lambda. Now the entire computation works well in approximately 400 ms, and I need to reduce this computation time further.

Answered By: Ayush Srivastava