TypeError: expected string or bytes-like object – with Python/NLTK word_tokenize

Question:

I have a dataset with ~40 columns, and am using .apply(word_tokenize) on 5 of them like so: df['token_column'] = df.column.apply(word_tokenize).

I’m getting a TypeError for only one of the columns; we’ll call it problem_column:

TypeError: expected string or bytes-like object

Here’s the full error (df and column names and PII stripped). I’m new to Python and am still trying to figure out which parts of the error message are relevant:

TypeError                                 Traceback (most recent call last)
<ipython-input-51-22429aec3622> in <module>()
----> 1 df['token_column'] = df.problem_column.apply(word_tokenize)

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\pandas\core\series.py in apply(self, func, convert_dtype, args, **kwds)
   2353             else:
   2354                 values = self.asobject
-> 2355                 mapped = lib.map_infer(values, f, convert=convert_dtype)
   2356 
   2357         if len(mapped) and isinstance(mapped[0], Series):

pandas\_libs\src\inference.pyx in pandas._libs.lib.map_infer (pandas\_libs\lib.c:66440)()

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\__init__.py in word_tokenize(text, language, preserve_line)
    128     :type preserver_line: bool
    129     """
--> 130     sentences = [text] if preserve_line else sent_tokenize(text, language)
    131     return [token for sent in sentences
    132             for token in _treebank_word_tokenizer.tokenize(sent)]

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\__init__.py in sent_tokenize(text, language)
     95     """
     96     tokenizer = load('tokenizers/punkt/{0}.pickle'.format(language))
---> 97     return tokenizer.tokenize(text)
     98 
     99 # Standard word tokenizer.

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in tokenize(self, text, realign_boundaries)
   1233         Given a text, returns a list of the sentences in that text.
   1234         """
-> 1235         return list(self.sentences_from_text(text, realign_boundaries))
   1236 
   1237     def debug_decisions(self, text):

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in sentences_from_text(self, text, realign_boundaries)
   1281         follows the period.
   1282         """
-> 1283         return [text[s:e] for s, e in self.span_tokenize(text, realign_boundaries)]
   1284 
   1285     def _slices_from_text(self, text):

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in span_tokenize(self, text, realign_boundaries)
   1272         if realign_boundaries:
   1273             slices = self._realign_boundaries(text, slices)
-> 1274         return [(sl.start, sl.stop) for sl in slices]
   1275 
   1276     def sentences_from_text(self, text, realign_boundaries=True):

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in <listcomp>(.0)
   1272         if realign_boundaries:
   1273             slices = self._realign_boundaries(text, slices)
-> 1274         return [(sl.start, sl.stop) for sl in slices]
   1275 
   1276     def sentences_from_text(self, text, realign_boundaries=True):

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _realign_boundaries(self, text, slices)
   1312         """
   1313         realign = 0
-> 1314         for sl1, sl2 in _pair_iter(slices):
   1315             sl1 = slice(sl1.start + realign, sl1.stop)
   1316             if not sl2:

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _pair_iter(it)
    310     """
    311     it = iter(it)
--> 312     prev = next(it)
    313     for el in it:
    314         yield (prev, el)

C:\Users\egagne\AppData\Local\Continuum\Anaconda3\lib\site-packages\nltk\tokenize\punkt.py in _slices_from_text(self, text)
   1285     def _slices_from_text(self, text):
   1286         last_break = 0
-> 1287         for match in self._lang_vars.period_context_re().finditer(text):
   1288             context = match.group() + match.group('after_tok')
   1289             if self.text_contains_sentbreak(context):

TypeError: expected string or bytes-like object

The 5 columns are all character/string (as verified in SQL Server, SAS, and using .select_dtypes(include=[object])).

For good measure I used .to_string() to make sure problem_column is really and truly nothing other than a string, but I continue to get the error. If I process the columns separately, good_column1–good_column4 continue to work and problem_column still generates the error.

I’ve googled around, and aside from stripping any numbers from the set (which I can’t do, because they are meaningful) I haven’t found any additional fixes.
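
For reference, a minimal diagnostic sketch (problem_column is a placeholder name; df is the DataFrame in question) that shows which values in the column are not actually Python strings:

# rows whose value is not a str (e.g. None, or NaN, which pandas stores as a float)
not_strings = df[~df['problem_column'].apply(lambda x: isinstance(x, str))]
print(not_strings['problem_column'])

# missing values are the usual culprit for this TypeError
print(df['problem_column'].isnull().sum())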

Asked By: LMGagne


Answers:

Try:

from nltk.tokenize import word_tokenize as WordTokenizer

def word_tokenizer(data, col):
    token = []
    for item in data[col]:
        token.append(WordTokenizer(item))
    return token

token = word_tokenizer(df, column)        # column: name of the column to tokenize
df.insert(index, 'token_column', token)   # index: position at which to insert the new column
Answered By: KerryChu

This is what got me the desired result.

from nltk.tokenize import word_tokenize

def custom_tokenize(text):
    # guard against None / empty values before handing them to word_tokenize
    if not text:
        print('The text to be tokenized is a None type. Defaulting to blank string.')
        text = ''
    return word_tokenize(text)

df['tokenized_column'] = df.column.apply(custom_tokenize)
Answered By: LMGagne

It might be raising an error because word_tokenize() only accepts one string at a time. You can loop through the strings and tokenize each one.

For example:

from nltk.tokenize import sent_tokenize, word_tokenize

text = "This is the first sentence. This is the second one. And this is the last one."
sentences = sent_tokenize(text)
words = [word_tokenize(sent) for sent in sentences]
print(words)
Answered By: Danish Shaikh

The problem is that you have None (NA) values in your DF. Try this:

# drop the rows whose 'label' is missing, then tokenize
df.dropna(subset=['label'], inplace=True)
tokens = df['label'].apply(word_tokenize)
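
If you need the tokens back as a new column aligned with the original rows, a small variant (a sketch; 'label' and 'tokens' are placeholder names) is to fill the missing values with an empty string instead of dropping the rows:

from nltk.tokenize import word_tokenize

# empty strings tokenize to empty lists, so every row keeps its place
df['tokens'] = df['label'].fillna('').apply(word_tokenize)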
Answered By: Ekho

Even though this has already been answered, my approach to the problem was the following:

# check for NaN values and remove those rows (optional)
df.dropna(subset=['column_name'], inplace=True)

# convert the column to string (note: any remaining NaN becomes the literal string 'nan')
df['column_name'] = df['column_name'].astype(str)

# apply the `word_tokenize()` function
tokens = df['column_name'].apply(word_tokenize)
Answered By: Paschalis Ag

I had the same issue earlier today. I tried df.dropna() and df.astype(str), then realized I could simply cast whatever the variable is into a string.

So you can also perform data type casting on your input text like this:

from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer

porter = PorterStemmer()

def stemSentence(sentence):
    token_words = word_tokenize(str(sentence))  # this is the cast: str()
    stem_sentence = []
    for word in token_words:
        stem_sentence.append(porter.stem(word))
        stem_sentence.append(" ")
    return "".join(stem_sentence)
Answered By: kojo justine