How to make predictions with scikit's Surprise?

Question:

I’m having some trouble understanding the Surprise workflow. I have a file for training data (which I want to split into training and validation sets) and a file for test data. In particular, I’m having trouble understanding the difference between a Surprise Dataset and a Trainset.

# Imports and data location
import os

import pandas as pd
from sklearn.model_selection import train_test_split
from surprise import Dataset, KNNBaseline, Reader
from surprise.model_selection import cross_validate

data_dir = 'DIRECTORY_NAME'
reader = Reader(rating_scale=(1, 5))

# Create pandas dataframes
train_valid_df = pd.read_csv(os.path.join(data_dir, 'TRAINING_FILENAME.csv'))
train_df, valid_df = train_test_split(train_valid_df, test_size=0.2)
test_df = pd.read_csv(os.path.join(data_dir, 'TEST_FILENAME.csv'))

# Create surprise Dataset objects
train_valid_Dataset = Dataset.load_from_df(train_valid_df[['user_id', 'item_id', 'rating']], reader)
train_Dataset = Dataset.load_from_df(train_df[['user_id', 'item_id', 'rating']], reader)
valid_Dataset = Dataset.load_from_df(valid_df[['user_id', 'item_id', 'rating']], reader)
test_Dataset = Dataset.load_from_df(test_df[['user_id', 'item_id', 'rating']], reader)

# Create surprise Trainset object (and testset object?)
train_Trainset = train_Dataset.build_full_trainset()
valid_Testset = train_Trainset.build_anti_testset()

Then, I create my predictor:

algo = KNNBaseline(k=60, min_k=2, sim_options={'name': 'msd', 'user_based': True})

Now, if I wanted to cross-validate, I would do:

cross_v = cross_validate(algo, train_valid_Dataset, measures=['mae'], cv=10, verbose=True)

This trains the model (?), but if I wanted to use my fixed validation set instead, what would I do? This?

algo.fit(train_Trainset)

After doing this, I tried to get some predictions:

predictions = algo.test(valid_Testset)
print(predictions[0])

With this being the result
[screenshot of the printed Prediction object]
But when I try to predict using item and user id numbers, it says such a prediction is impossible:

print(algo.predict('13', '194'))
print(algo.predict('260', '338'))
print(algo.predict('924', '559'))

Yielding:

[screenshot of Prediction outputs with was_impossible: True]

The first user/item pair is from the training anti-testset, the second from the validation set, and the third from the training set. I don’t know why it behaves like this, and I’ve found the documentation confusing at times. Similarly, many tutorials online seem to train directly on pandas DataFrames, which throws errors for me. Can anybody clarify what the Surprise workflow actually looks like? How do I train and then make predictions on a test set?

Thanks!

Asked By: WhoDatBoy


Answers:

Hope this helps. Since you have separate train and test sets, we first create data similar to yours:

from surprise import Dataset, KNNBaseline, Reader
import pandas as pd
import numpy as np
from surprise.model_selection import cross_validate
reader = Reader(rating_scale=(1, 5))

train_df = pd.DataFrame({'user_id':np.random.choice(['1','2','3','4'],100),
                         'item_id':np.random.choice(['101','102','103','104'],100),
                         'rating':np.random.uniform(1,5,100)})

valid_df = pd.DataFrame({'user_id':np.random.choice(['1','2','3','4'],100),
                         'item_id':np.random.choice(['101','102','103','104'],100),
                         'rating':np.random.uniform(1,5,100)})

Then we need to convert the training data to a surprise Trainset, similar to what you have done:

train_Dataset = Dataset.load_from_df(train_df[['user_id', 'item_id', 'rating']], reader)
valid_Dataset = Dataset.load_from_df(valid_df[['user_id', 'item_id', 'rating']], reader)

train_Dataset = train_Dataset.build_full_trainset()

For fitting, you only need train_Dataset. As for the cross-validation, I am not sure what you are trying to do, and it is out of scope for the prediction question, so we just fit:

algo = KNNBaseline(k=60, min_k=2, sim_options={'name': 'msd', 'user_based': True})
algo.fit(train_Dataset)

To predict, you need to provide the test set as a list of (user_id, item_id, rating) triples, in the same column order as your input. For example, to use the validation Dataset built above:

testset = [valid_Dataset.df.loc[i].to_list() for i in range(len(valid_Dataset.df))]
algo.test(testset)[:2] 

[Prediction(uid='2', iid='103', r_ui=3.0224818872683845, est=2.8486558674146125, details={'actual_k': 25, 'was_impossible': False}),
 Prediction(uid='2', iid='103', r_ui=4.609064535195377, est=2.8486558674146125, details={'actual_k': 25, 'was_impossible': False})]

If you wanna test one or two values, it will be:

algo.test([['1','101',None]])
Answered By: StupidWolf

For a trained model (SVD), I am also seeing different predictions between model.test vs model.predict, any idea as to why this might be happening:

model1.predict(uid="1533",iid=57549)
#Prediction(uid='1533', iid=57549, r_ui=None, est=4.736261476694327, details={'was_impossible': False})

model1.test([['1533','57549',None]])
#[Prediction(uid='1533', iid='57549', r_ui=None, est=4.415796133559741, details={'was_impossible': False})]
Answered By: Ashish Dhiman