Python: implementing feature vectors efficiently

Question:

I’m implementing feature vectors as bit maps for documents in a corpus. I already have the vocabulary for the entire corpus (as a list/set) and a list of the terms in each document.

For example, if the corpus vocabulary is ['a', 'b', 'c', 'd'] and the terms in document d1 are ['a', 'b', 'd', 'd'], the feature vector for d1 should be [1, 1, 0, 2].

To generate the feature vector, I’d iterate over the corpus vocabulary and check if each term is in the list of document terms, then set the bit in the correct position in the document’s feature vector.
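
For concreteness, a minimal sketch of that naive approach, using the example vocabulary and document above:

vocab = ['a', 'b', 'c', 'd']        # sorted corpus vocabulary
doc_terms = ['a', 'b', 'd', 'd']    # terms in document d1

# for each vocab term, count its occurrences in the document
feature_vector = [doc_terms.count(term) for term in vocab]
# -> [1, 1, 0, 2]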

What would be the most efficient way to implement this? Here are some things I’ve considered:

  • Using a set would make checking vocab membership very efficient but sets have no ordering, and the feature vector bits need to be in the order of the sorted corpus vocabulary.
  • Using a dict for the corpus vocab (mapping each vocab term to an arbitrary value, like 1) would allow iteration over sorted(dict.keys()) so I could keep track of the index. However, I’d have the space overhead of dict.values().
  • Using a sorted list would make membership checks inefficient (a linear scan per lookup).

What would StackOverflow suggest?

Asked By: yavoh


Answers:

I think the most efficient way is to loop over each document’s terms, get the position of the term in the (sorted) corpus and set the bit accordingly.

The sorted list of corpus terms can be stored as a dictionary with a term -> index mapping (basically an inverted index).

You can create it like so:

corpus = {term: index for index, term in enumerate(sorted(all_words))}   # term -> index in the sorted vocabulary

For each document you’d generate a list of zeros as its feature vector:

num_words = len(corpus)
fvs = [[0] * num_words for _ in docs]   # one zero-filled vector per document

Then building the feature vectors would be:

for i, doc_terms in enumerate(docs):
    fv = fvs[i]
    for term in doc_terms:
        fv[corpus[term]] += 1   # increment the count at that term's index

There is no overhead for membership testing; you just loop over all the terms of all the documents and look up each term’s index in the dictionary.
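
For example, with the vocabulary and document from the question, the pieces above fit together as a complete, minimal sketch:

all_words = {'a', 'b', 'c', 'd'}
docs = [['a', 'b', 'd', 'd']]

corpus = {term: index for index, term in enumerate(sorted(all_words))}
num_words = len(corpus)
fvs = [[0] * num_words for _ in docs]
for i, doc_terms in enumerate(docs):
    fv = fvs[i]
    for term in doc_terms:
        fv[corpus[term]] += 1

print(fvs)   # [[1, 1, 0, 2]]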


That all said, depending on the size of the corpus, you should have a look at numpy and scipy. It is likely that you will run into memory problems, and scipy provides special data types for sparse matrices (instead of a list of lists) which can save a lot of memory.
You can use the same approach as shown above, but instead of adding numbers to list elements, you add them to matrix elements (e.g. the rows will be the documents and the columns the terms of the corpus).
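
A rough sketch of that, assuming docs and corpus as defined above and using scipy’s lil_matrix (one of several sparse formats):

import scipy.sparse as sp

# rows are documents, columns are the terms of the corpus
fvs = sp.lil_matrix((len(docs), len(corpus)), dtype=int)
for i, doc_terms in enumerate(docs):
    for term in doc_terms:
        fvs[i, corpus[term]] += 1   # same counting loop, sparse storage

fvs = fvs.tocsr()   # CSR is more efficient for later arithmetic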

You can also make use of some matrix operations provided by numpy if you want to apply local or global weighting schemes.
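
For instance, one possible global weighting (an idf-style factor) could look roughly like this, assuming fvs is the dense documents-by-terms count matrix from above:

import numpy as np

counts = np.array(fvs, dtype=float)             # documents x terms
df = np.count_nonzero(counts, axis=0)           # document frequency of each term
idf = np.log(len(counts) / np.maximum(df, 1))   # guard against terms that never occur
weighted = counts * idf                         # broadcast the global weight across rows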

I hope this gets you started 🙂

Answered By: Felix Kling