Training, Validation and Test sets for imbalanced datasets in Machine Learning

Question:

I am working on an NLP classification task. My dataset is imbalanced: some authors have only 1 text, so I want those texts to appear only in the training set. For the remaining authors I need to split the dataset into a 70% training set, 15% validation set and 15% test set.

I tried to use the train_test_split function from sklearn, but the results aren't that good.

My dataset is a dataframe that looks like this:

Title   Preprocessed_Text   Label
-----   -----------------   -----

Please help me out.

Asked By: user18002341


Answers:

With only one sample of a particular class it is essentially impossible to measure classification performance on that class. So I recommend using one or more oversampling approaches to overcome the imbalance problem ([a hands-on article on it][1]). You should also take care to split the data in a way that preserves the prior probability of each class (for example by setting the stratify argument in train_test_split). In addition, there are some considerations about the scoring method you must take into account (for example, accuracy is not the best fit for scoring).
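
A minimal sketch of what that could look like, assuming the imbalanced-learn package (the answer does not name a specific library) and the column names from the question's dataframe; it also assumes every class kept in the split has at least two texts, since a stratified split cannot handle singleton classes:

from sklearn.model_selection import train_test_split
from imblearn.over_sampling import RandomOverSampler  # assumed: imbalanced-learn is installed

X = df[['Preprocessed_Text']]   # kept 2-D so the sampler accepts it; column names from the question
y = df['Label']

# Stratified split first, so each split keeps the original class proportions ...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.3, random_state=42
)

# ... then oversample only the training portion, so duplicated samples
# never leak into the test set.
ros = RandomOverSampler(random_state=42)
X_train_res, y_train_res = ros.fit_resample(X_train, y_train)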

Answered By: meti

It is rather hard to obtain good classification results for a class that contains only 1 instance (at least for that specific class). Regardless, for imbalanced datasets, one should use stratified train_test_split (using stratify=y), which preserves the same proportions of instances in each class as observed in the original dataset.

from sklearn.model_selection import train_test_split
# stratify=y keeps the class proportions of the full dataset in both splits
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, test_size=0.25)
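
Since the question asks for a 70/15/15 split with single-text authors kept only in training, one way to do that (a sketch using the column names from the question, not part of the original answer) is to set the single-text authors aside, split the rest twice with stratification, and then append the singletons to the training set:

import pandas as pd
from sklearn.model_selection import train_test_split

counts = df['Label'].value_counts()
singletons = df[df['Label'].isin(counts[counts == 1].index)]   # authors with a single text
rest = df[df['Label'].isin(counts[counts > 1].index)]

# 70% train / 30% temp, then split temp in half for 15% validation / 15% test.
# This assumes each remaining author has enough texts for both stratified splits.
train, temp = train_test_split(rest, stratify=rest['Label'], test_size=0.30, random_state=42)
val, test = train_test_split(temp, stratify=temp['Label'], test_size=0.50, random_state=42)

train = pd.concat([train, singletons])   # single-text authors go only into the training set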

I should also add that if the dataset is rather small, let's say no more than 100 instances, it would be preferable to use cross-validation instead of a single train_test_split, and more specifically StratifiedKFold or RepeatedStratifiedKFold, which return stratified folds (see this answer to understand the difference between the two).
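
For example, a sketch of stratified cross-validation on this kind of text data (the vectoriser and classifier here are only placeholders, not something the answer prescribes):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder model: TF-IDF features fed to logistic regression
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(model, df['Preprocessed_Text'], df['Label'], cv=cv, scoring='f1_weighted')
print(scores.mean())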

When it comes to evaluation, you should consider using metrics such as Precision, Recall and F1-score (the harmonic mean of Precision and Recall), using the weighted average for each of these, which weights each label by its number of true instances. As per the documentation:

‘weighted’:

Calculate metrics for each label, and find their average
weighted by support (the number of true instances for each label).
This alters ‘macro’ to account for label imbalance; it can result in
an F-score that is not between precision and recall.
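
As a sketch of how these metrics could be computed (assuming a model has already been fitted on the training split):

from sklearn.metrics import classification_report, f1_score

y_pred = model.predict(X_test)                       # assumes a fitted model and a held-out test split
print(f1_score(y_test, y_pred, average='weighted'))  # single weighted-average F1 value
print(classification_report(y_test, y_pred))         # per-class precision, recall and F1, plus averages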

Answered By: Chris