What is the best stemming method in Python?

Question:

I tried all the nltk stemming methods, but they give me weird results with some words.

Examples

They often cut the end of words when they shouldn’t:

  • poodle => poodl
  • article => articl

or don’t stem well:

  • easily and easy are not stemmed to the same word
  • leaves, grows, fairly are not stemmed

Do you know other stemming libs in python, or a good dictionary?

Thank you

Asked By: PeYoTlL


Answers:

Python implementations of the Porter, Porter2, Paice-Husk, and Lovins stemming algorithms for English are available in the stemming package.

Answered By: Stephen Lin

The results you are getting are (generally) expected for a stemmer in English. You say you tried “all the nltk methods” but when I try your examples, that doesn’t seem to be the case.

Here are some examples using the PorterStemmer:

import nltk
ps = nltk.stem.PorterStemmer()
ps.stem('grows')
'grow'
ps.stem('leaves')
'leav'
ps.stem('fairly')
'fairli'

The results are ‘grow’, ‘leav’ and ‘fairli’ which, even if they aren’t what you wanted, are stemmed versions of the original words.

If we switch to the Snowball stemmer, we have to provide the language as a parameter.

import nltk
sno = nltk.stem.SnowballStemmer('english')
sno.stem('grows')
'grow'
sno.stem('leaves')
'leav'
sno.stem('fairly')
'fair'

The results are as before for ‘grows’ and ‘leaves’, but ‘fairly’ is stemmed to ‘fair’.

So in both cases (and there are more than two stemmers available in nltk), words that you say are not stemmed, in fact, are. The LancasterStemmer will return ‘easy’ when provided with ‘easily’ or ‘easy’ as input.
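To make the comparison concrete, here is a sketch running the three stemmers mentioned in this thread side by side (assuming nltk is installed; none of these stemmers need extra corpora to be downloaded):

```python
from nltk.stem import PorterStemmer, SnowballStemmer, LancasterStemmer

porter = PorterStemmer()
snowball = SnowballStemmer('english')
lancaster = LancasterStemmer()

# Compare how aggressively each stemmer treats the same inputs.
for word in ['easily', 'easy', 'grows']:
    print(word, '->', porter.stem(word), snowball.stem(word), lancaster.stem(word))

# As noted above, Lancaster maps 'easily' and 'easy' to the same stem.
```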

Maybe you really wanted a lemmatizer? That would return ‘article’ and ‘poodle’ unchanged.

import nltk
lemma = nltk.wordnet.WordNetLemmatizer()
lemma.lemmatize('article')
'article'
lemma.lemmatize('leaves')
'leaf'
Answered By: Spaceghost

All the stemmers discussed here are algorithmic stemmers, so they can always produce unexpected results, such as:

In [3]: from nltk.stem.porter import *

In [4]: stemmer = PorterStemmer()

In [5]: stemmer.stem('identified')
Out[5]: u'identifi'

In [6]: stemmer.stem('nonsensical')
Out[6]: u'nonsens'

To get the root words correctly, one needs a dictionary-based stemmer such as Hunspell. The hunspell Python bindings provide one. Example code:

>>> import hunspell
>>> hobj = hunspell.HunSpell('/usr/share/myspell/en_US.dic', '/usr/share/myspell/en_US.aff')
>>> hobj.spell('spookie')
False
>>> hobj.suggest('spookie')
['spookier', 'spookiness', 'spooky', 'spook', 'spoonbill']
>>> hobj.spell('spooky')
True
>>> hobj.analyze('linked')
[' st:link fl:D']
>>> hobj.stem('linked')
['link']
Answered By: 0xF

In my chatbot project I used the PorterStemmer; however, the LancasterStemmer also serves the purpose. The ultimate objective is to stem each word to its root so that we can search and compare it with the search-word inputs.

For example:
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords

ps = PorterStemmer()
stop_words = set(stopwords.words('english'))

def SrchpattrnStmmed(self):
    # Tokenize the input, drop stop words, and stem what remains.
    KeyWords = []
    SrchpattrnTkn = word_tokenize(self.input)
    for token in SrchpattrnTkn:
        if token not in stop_words:
            KeyWords.append(ps.stem(token))
    #print(KeyWords)
    return KeyWords
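As a usage sketch of the same idea (hypothetical names; only the Porter stemmer and a plain split() are used here, so no extra NLTK data files are needed), stemming both the query and the document lets inexact word forms match:

```python
from nltk.stem import PorterStemmer

ps = PorterStemmer()

def stemmed_keywords(text, stop_words=frozenset({'the', 'a', 'is', 'for'})):
    # Same idea as SrchpattrnStmmed above: drop stop words, stem the rest.
    return {ps.stem(tok) for tok in text.lower().split() if tok not in stop_words}

query = stemmed_keywords("searching for poodles")
doc = stemmed_keywords("the poodle search results")
print(query & doc)  # stems shared by the query and the document
```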

Hope this helps.

Answered By: sarvesh Kumar

Stemming is all about removing suffixes (usually only suffixes; as far as I have tried, none of the nltk stemmers could remove a prefix, let alone an infix).
So we can call stemming a dumb / not-so-intelligent program: it doesn’t check whether a word has a meaning before or after stemming.
For example, if you try to stem “xqaing”, which is not even a word, it will remove “-ing” and give you “xqa”.
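A toy illustration of the point (a deliberately naive sketch, not any real stemming algorithm): it strips suffixes with no idea whether the result is a word:

```python
def naive_stem(word, suffixes=('ing', 'ed', 'ly', 's')):
    # Strip the first matching suffix, with no dictionary check at all.
    for suf in suffixes:
        if word.endswith(suf) and len(word) > len(suf) + 2:
            return word[:-len(suf)]
    return word

print(naive_stem('xqaing'))   # 'xqa' -- not a word, but the stemmer can't tell
print(naive_stem('growing'))  # 'grow'
```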

So, in order to use a smarter system, one can use lemmatizers.
Lemmatizers use well-formed lemmas (words) from WordNet and dictionaries,
so they always take and return a proper word. However, lemmatization is slower, because the lemmatizer has to search its dictionary to find the relevant entry.
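The contrast can be sketched in a few lines (a hypothetical mini-lemmatizer with a hand-made dictionary, just to show the lookup idea; real lemmatizers use WordNet-scale data and part-of-speech tags):

```python
# Tiny hand-made lemma dictionary -- purely illustrative.
LEMMA_DICT = {'leaves': 'leaf', 'grows': 'grow', 'easily': 'easy'}

def mini_lemmatize(word):
    # Dictionary lookup: only returns well-formed words it already knows.
    return LEMMA_DICT.get(word, word)

print(mini_lemmatize('leaves'))  # 'leaf'
print(mini_lemmatize('xqaing'))  # unknown word is returned unchanged
```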

Answered By: Ritveak

Stemmers vary in their aggressiveness. Porter is one of the most aggressive stemmers for English; I find it usually hurts more than it helps.
On the lighter side, you can either use a lemmatizer instead, as already suggested,
or a lighter algorithmic stemmer.
The limitation of lemmatizers is that they cannot handle unknown words.

Personally I like the Krovetz stemmer, which is a hybrid solution combining a dictionary lemmatizer with a lightweight stemmer for out-of-vocabulary words. Krovetz is also available as the kstem or light_stemmer option in Elasticsearch. There is a Python implementation on PyPI (https://pypi.org/project/KrovetzStemmer/), though that is not the one I have used.
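The hybrid idea behind Krovetz can be sketched as: try the dictionary first, and fall back to a light suffix rule for out-of-vocabulary words (hypothetical names, not the real KrovetzStemmer API):

```python
# Tiny illustrative lexicon standing in for a real dictionary lemmatizer.
LEXICON = {'leaves': 'leaf', 'fairly': 'fair'}

def hybrid_stem(word):
    # 1) dictionary lemmatizer for known words
    if word in LEXICON:
        return LEXICON[word]
    # 2) light algorithmic fallback for unknown (out-of-vocabulary) words
    if word.endswith('s') and len(word) > 3:
        return word[:-1]
    return word

print(hybrid_stem('leaves'))  # 'leaf' via the dictionary
print(hybrid_stem('gizmos'))  # 'gizmo' via the light fallback
```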

Another option is the lemmatizer in spaCy. After processing with spaCy, every token has a lemma_ attribute (note the underscore: lemma holds a numerical identifier of the lemma, lemma_ holds the string) – https://spacy.io/api/token

Here are some papers comparing various stemming algorithms:

Answered By: Daniel Mahler

There are already very good answers to this question, but I wanted to add some information that might be useful. In my research I found a page that gives great detail about stemming and lemmatization: https://nlp.stanford.edu/IR-book/html/htmledition/stemming-and-lemmatization-1.html.

To give a short summary, here are some insights from the page:

Stemming and lemmatization

For grammatical reasons, documents are going to use different forms of a word, such as organize, organizes, and organizing. Additionally, there are families of derivationally related words with similar meanings, such as democracy, democratic, and democratization. In many situations, it seems as if it would be useful for a search for one of these words to return documents that contain another word in the set.

The goal of both stemming and lemmatization is to reduce inflectional forms and sometimes derivationally related forms of a word to a common base form. For instance:

am, are, is -> be
car, cars, car’s, cars’ -> car

The result of this mapping of text will be something like:
the boy’s cars are different colors -> the boy car be differ color
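Note that the mapping above is what a lemmatizer plus normalization would give; a pure stemmer like Porter only gets partway there. A sketch, assuming nltk is installed (the PorterStemmer needs no downloaded corpora):

```python
from nltk.stem import PorterStemmer

ps = PorterStemmer()
# Porter yields 'differ' and 'color', but cannot map 'are' -> 'be';
# that mapping requires a lemmatizer with knowledge of irregular verbs.
print([ps.stem(w) for w in "the boys cars are different colors".split()])
```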

Also, the nltk package has been updated and you can import WordNetLemmatizer with from nltk.stem import WordNetLemmatizer. The lemmatizer requires the WordNet corpus to be downloaded before use; the command below works well with version 3.6.1.

import nltk

nltk.download("wordnet")
Answered By: abdullahselek