Error while embedding: could not convert string to float: 'ng'
Question:
I am working with pre-trained word vectors produced by the GloVe method, trained on Wikipedia data. While loading the embeddings I get an error saying: could not convert string to float: 'ng'.
I tried going through the data, but I was not able to find the token 'ng'.
# load embedding as a dict
def load_embedding(filename):
    # load embedding into memory, skip first line
    file = open(filename, 'r', errors='ignore')
    # create a map of words to vectors
    embedding = dict()
    for line in file:
        parts = line.split()
        # key is string word, value is numpy array for vector
        embedding[parts[0]] = np.array(parts[1:], dtype='float32')
    file.close()
    return embedding
Here is the error report. Please guide me further.
runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP')
C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\h5py\__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
Traceback (most recent call last):
  File "<ipython-input-1-d91aa5ebf9f8>", line 1, in <module>
    runfile('C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py', wdir='C:/Users/AKSHAY/Desktop/NLP')
  File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 705, in runfile
    execfile(filename, namespace)
  File "C:\Users\AKSHAY\AppData\Local\conda\conda\envs\py355\lib\site-packages\spyder\utils\site\sitecustomize.py", line 102, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 123, in <module>
    raw_embedding = load_embedding('glove.6B.50d.txt')
  File "C:/Users/AKSHAY/Desktop/NLP/Pre-trained GloVe.py", line 67, in load_embedding
    embedding[parts[0]] = np.array(parts[1:], dtype='float32')
ValueError: could not convert string to float: 'ng'
Answers:
It looks like 'ng' is a word (token) in your file that you are trying to get a word vector for, and the GloVe pre-trained vectors probably do not contain a vector for 'ng', which is what causes the error. So you need to check whether each word actually has a vector in the GloVe embeddings before using it. See the section labeled 'Create a weight matrix for words in training docs' in this post for an example of how to do this: Text Classification Using CNN, LSTM and Pre-trained Glove Word Embeddings: Part-3.
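The check described above can be sketched as follows. This is an illustrative helper, not code from the linked post; the name `build_weight_matrix` and the assumption that `word_index` comes from something like a fitted Keras `Tokenizer` are mine:

```python
import numpy as np

def build_weight_matrix(embedding, word_index, dim=50):
    """Build an embedding weight matrix, leaving unknown words as zeros.

    embedding  -- dict mapping word -> numpy vector (e.g. loaded from GloVe)
    word_index -- dict mapping word -> integer index (row 0 reserved for padding)
    dim        -- vector dimension; must match the GloVe file used
    """
    weights = np.zeros((len(word_index) + 1, dim), dtype='float32')
    for word, i in word_index.items():
        vector = embedding.get(word)  # None if the word has no GloVe vector
        if vector is not None:
            weights[i] = vector
    return weights
```

Words that are missing from the GloVe vocabulary simply keep a zero row instead of raising an error.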
ValueError: could not convert string to float: 'ng'
To address the error above, add encoding='utf8' when opening the file. Without an explicit encoding, Python falls back to the platform default (e.g. cp1252 on Windows), and errors='ignore' then silently drops undecodable bytes, which can corrupt lines in the file:
file = open(filename, 'r', errors='ignore', encoding='utf8')
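Applying this fix to the question's load_embedding, the whole function might look like the sketch below (also using a with block so the file is closed even on error):

```python
import numpy as np

def load_embedding(filename):
    """Load a GloVe text file into a dict of word -> float32 vector."""
    embedding = dict()
    # Open as UTF-8 so multi-byte characters decode correctly instead of
    # being dropped, which is what produced the stray 'ng' token.
    with open(filename, 'r', encoding='utf8') as file:
        for line in file:
            parts = line.split()
            # key is the word, value is its vector as a numpy array
            embedding[parts[0]] = np.array(parts[1:], dtype='float32')
    return embedding
```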
This seems to work fine (note that the positional 'r' mode must come before the keyword argument encoding='utf8', or Python raises a SyntaxError):
import numpy as np

embedding_model = {}
f = open(r'dataset/glove.840B.300d.txt', 'r', encoding='utf8')
for line in f:
    values = line.split()
    word = ''.join(values[:-300])
    coefs = np.asarray(values[-300:], dtype='float32')
    embedding_model[word] = coefs
f.close()
You can do it like this when using the file glove.840B.300d.txt (the with block closes the file automatically, so no f.close() is needed):
embedding_dict = {}
with open('glove.840B.300d.txt', 'r', encoding='utf8') as f:
    for line in f:
        values = line.split()
        word = ''.join(values[:-300])
        vectors = np.asarray(values[-300:], dtype='float32')
        embedding_dict[word] = vectors
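One caveat with the two snippets above: glove.840B.300d.txt is known to contain a handful of tokens with embedded spaces (e.g. ". . ."), and ''.join(values[:-300]) collapses those spaces, changing the token. A variant that preserves them, assuming each line ends with exactly 300 floats, could look like this (the helper name is mine):

```python
import numpy as np

def parse_glove_line(line, dim=300):
    """Parse one GloVe line, keeping spaces inside multi-word tokens."""
    values = line.rstrip().split(' ')
    # Everything before the last `dim` fields is the token itself.
    word = ' '.join(values[:-dim])
    vector = np.asarray(values[-dim:], dtype='float32')
    return word, vector
```

With the plain ''.join approach, a token like ". . ." would silently become "..." and collide with any genuine "..." entry.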