Import GoogleNews-vectors-negative300.bin


I am working on code using gensim and having a tough time troubleshooting a ValueError. I finally managed to unzip the GoogleNews-vectors-negative300.bin.gz file so I could use it in my model. I also tried gzip, without success. The error occurs in the last line of the code below. What can be done to fix the error? Are there any workarounds? Finally, is there a website I could reference?

Thank you respectfully for your assistance!

import gensim
from keras import backend
from keras.layers import Dense, Input, Lambda, LSTM, TimeDistributed
from keras.layers.merge import concatenate
from keras.layers.embeddings import Embedding
from keras.models import Model

pretrained_embeddings_path = "GoogleNews-vectors-negative300.bin"
word2vec = gensim.models.KeyedVectors.load_word2vec_format(pretrained_embeddings_path, binary=True)

ValueError                                Traceback (most recent call last)
<ipython-input-3-23bd96c1d6ab> in <module>()
  1 pretrained_embeddings_path = "GoogleNews-vectors-negative300.bin"
----> 2 word2vec = gensim.models.KeyedVectors.load_word2vec_format(pretrained_embeddings_path, binary=True)

C:\Users\green\Anaconda3\envs\py35\lib\site-packages\gensim\models\keyedvectors.py in load_word2vec_format(cls, fname,
fvocab, binary, encoding, unicode_errors, limit, datatype)
244                             word.append(ch)
245                     word = utils.to_unicode(b''.join(word), 
encoding=encoding, errors=unicode_errors)
--> 246                     weights = fromstring(fin.read(binary_len), dtype=REAL)
247                     add_word(word, weights)
248             else:

ValueError: string size must be a multiple of element size
Asked By: Green



You have to write the complete path.

Use this path:

Answered By: user8403237

Edit: The S3 URL has stopped working. You can download the data from Kaggle or use this Google Drive link (be careful when downloading files from Google Drive).
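An interrupted or partial download is a common cause of the ValueError in the question ("string size must be a multiple of element size"), since the binary file ends mid-vector. A quick way to check whether a downloaded .gz archive is complete is to read it to the end of the compressed stream; this sketch assumes the archive sits in the working directory under its usual name:

```python
import gzip
import zlib

def gzip_is_complete(path):
    """Return True if the .gz stream can be read to EOF without errors."""
    try:
        with gzip.open(path, "rb") as f:
            # Read in 1 MiB chunks; a truncated download raises EOFError
            # (or a zlib error) before the end-of-stream marker.
            while f.read(1 << 20):
                pass
        return True
    except (EOFError, OSError, zlib.error):
        return False

# e.g. gzip_is_complete("GoogleNews-vectors-negative300.bin.gz")
```

If this returns False, re-download the archive (ideally with a resumable tool such as `wget -c`) before decompressing it.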

The commands below no longer work.

brew install wget

wget -c ""

This downloads the GZIP compressed file that you can uncompress using:

gzip -d GoogleNews-vectors-negative300.bin.gz
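The same decompression can be done in Python instead of the shell, which avoids needing `gzip` on the PATH (e.g. on Windows, as in the question). This sketch streams the archive in chunks rather than reading the ~3.5 GB into memory, and keeps the original .gz file (unlike `gzip -d`):

```python
import gzip
import shutil

def gunzip(src, dst):
    """Decompress a .gz file to dst, streaming in chunks."""
    with gzip.open(src, "rb") as f_in, open(dst, "wb") as f_out:
        shutil.copyfileobj(f_in, f_out)

# gunzip("GoogleNews-vectors-negative300.bin.gz",
#        "GoogleNews-vectors-negative300.bin")
```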

You can then use the command below to load the word vectors.

from gensim import models

w = models.KeyedVectors.load_word2vec_format(
    '../GoogleNews-vectors-negative300.bin', binary=True)
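Once loaded, `w['king']` returns the 300-dimensional vector and `w.most_similar('king')` ranks neighbouring words by cosine similarity. As a toy illustration of what that ranking does, here is a small numpy sketch with made-up 3-d vectors standing in for the real model (the words and values are invented for the example):

```python
import numpy as np

# Hypothetical mini-vocabulary; real GoogleNews vectors are 300-d.
vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.1, 0.05, 0.9]),
}

def most_similar(word, topn=2):
    """Rank the other words by cosine similarity to `word`."""
    q = vocab[word]
    scores = {
        other: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for other, v in vocab.items() if other != word
    }
    return sorted(scores.items(), key=lambda kv: -kv[1])[:topn]
```

With these toy vectors, "queen" ranks above "apple" for the query "king", which is the same kind of result gensim's `most_similar` produces over the full vocabulary.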
Answered By: ohsoifelse

Try this:

import gensim.downloader as api

wv = api.load('word2vec-google-news-300')

vec_king = wv['king']

Also, visit this link:

Answered By: hansrajswapnil

Here is what worked for me. I loaded a part of the model and not the entire model as it’s huge.

!pip install wget

import wget
url = ''
filename = wget.download(url)

import gzip
f_in = gzip.open('GoogleNews-vectors-negative300.bin.gz', 'rb')
f_out = open('GoogleNews-vectors-negative300.bin', 'wb')
f_out.write(f_in.read())  # decompress; reads the whole file into memory
f_in.close()
f_out.close()

import gensim
from gensim.models import Word2Vec, KeyedVectors
from sklearn.decomposition import PCA

model = KeyedVectors.load_word2vec_format('GoogleNews-vectors-negative300.bin', binary=True, limit=100000)
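The `PCA` import above suggests projecting the embeddings down to 2-D for plotting. A minimal numpy sketch of that projection, using random stand-in vectors instead of `model.vectors` (which with `limit=100000` would have shape `(100000, 300)`); `sklearn.decomposition.PCA` would give the same result:

```python
import numpy as np

def pca_2d(X):
    """Project the rows of X onto their first two principal components."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                          # shape (n_samples, 2)

# Random stand-in for model.vectors, just for illustration.
rng = np.random.default_rng(0)
emb = rng.normal(size=(50, 300))
coords = pca_2d(emb)                              # shape (50, 2)
```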
Answered By: Anjana

You can use this URL that points to Google Drive’s download of the bin.gz file:

Alternative mirrors (including the S3 one mentioned here) seem to be broken.

Answered By: Piotr Rusin

Also available from figshare:

wget -O GoogleNews-vectors-negative300.bin

Answered By: Maciej S.