NLTK tokenization and contractions

Question:

I'm tokenizing text with NLTK, just sentences fed to wordpunct_tokenize. This splits contractions (e.g. "don't" into "don" + "'" + "t"), but I want to keep them as one word. I'm refining my methods for a more measured and precise tokenization of text, so I need to dig deeper into the NLTK tokenization module beyond the simple tokenizers.
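To illustrate, this is roughly what that split looks like (a minimal example, assuming the nltk.tokenize.wordpunct_tokenize function):

from nltk.tokenize import wordpunct_tokenize

# the apostrophe matches as punctuation, so it becomes its own token
wordpunct_tokenize("why don't you?")
>['why', 'don', "'", 't', 'you', '?']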

I'm guessing this is a common problem, and I'd like feedback from others who may have had to deal with this particular issue before.

edit:

Yeah, this is a general, scattershot question, I know.

Also, as a novice to NLP, do I need to worry about contractions at all?

EDIT:

The SExprTokenizer or TreebankWordTokenizer seems to do what I'm looking for, for now.
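For example, the Treebank tokenizer keeps the negation together as a single "n't" token rather than splitting off a bare apostrophe; a quick sketch of what its output should look like:

from nltk.tokenize import TreebankWordTokenizer

# Treebank rules split "don't" into "do" + "n't" instead of "don" + "'" + "t"
TreebankWordTokenizer().tokenize("why don't you?")
>['why', 'do', "n't", 'you', '?']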

Asked By: blueblank


Answers:

I’ve worked with NLTK before on this project. When I did, I found that contractions were useful to consider.

However, I did not write a custom tokenizer; I simply handled it after POS tagging.

I suspect this is not the answer you are looking for, but I hope it helps somewhat.

Answered By: inspectorG4dget

Which tokenizer you use really depends on what you want to do next. As inspectorG4dget said, some part-of-speech taggers handle split contractions, and in that case the splitting is a good thing. But maybe that’s not what you want. To decide which tokenizer is best, consider what you need for the next step, and then submit your text to http://text-processing.com/demo/tokenize/ to see how each NLTK tokenizer behaves.
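As a rough sketch of that point, the default NLTK tagger will give the "n't" piece its own tag when fed Treebank-style tokens (this assumes the tagger model has been downloaded; the exact tags may differ):

import nltk
# nltk.download('averaged_perceptron_tagger')  # one-time model download

nltk.pos_tag(['why', 'do', "n't", 'you', 'worry', '?'])
># roughly [('why', 'WRB'), ('do', 'VBP'), ("n't", 'RB'), ('you', 'PRP'), ('worry', 'VB'), ('?', '.')]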

Answered By: Jacob

Because the number of contractions is fairly small, one way to do it is to search for all contractions and replace them with their full equivalents (e.g. "don't" to "do not"), and then feed the updated sentences into wordpunct_tokenize.
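A minimal sketch of that idea (the contraction map below is a deliberately tiny, hypothetical starting point, not a complete list):

import re
from nltk.tokenize import wordpunct_tokenize

# hypothetical, partial mapping; a real one would cover many more forms (and casing)
CONTRACTIONS = {
    "don't": "do not",
    "can't": "cannot",
    "won't": "will not",
}

def expand_contractions(text):
    # replace each known contraction with its full equivalent before tokenizing
    pattern = re.compile('|'.join(re.escape(c) for c in CONTRACTIONS))
    return pattern.sub(lambda m: CONTRACTIONS[m.group(0)], text)

wordpunct_tokenize(expand_contractions("why don't you?"))
>['why', 'do', 'not', 'you', '?']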

Answered By: Neodawn

Use

import nltk
# whitespace-only splitting keeps "don't" intact, though punctuation stays attached ('you?')
nltk.WhitespaceTokenizer().tokenize("why don't you?")
>['why', "don't", 'you?']
Answered By: alchemy