How do I find Wally with Python?

Question:

Shamelessly jumping on the bandwagon 🙂

Inspired by How do I find Waldo with Mathematica and the follow-up How to find Waldo with R, as a new Python user I’d love to see how this could be done. It seems that Python would be better suited to this than R, and we don’t have to worry about licenses as we would with Mathematica or MATLAB.

In an example like the one below obviously simply using stripes wouldn’t work. It would be interesting if a simple rule based approach could be made to work for difficult examples such as this.

At the beach

I’ve added the [machine-learning] tag as I believe the correct answer will have to use ML techniques, such as the Restricted Boltzmann Machine (RBM) approach advocated by Gregory Klopper in the original thread. There is some RBM code available in Python which might be a good place to start, but obviously training data is needed for that approach.

At the 2009 IEEE International Workshop on MACHINE LEARNING FOR SIGNAL PROCESSING (MLSP 2009) they ran a Data Analysis Competition: Where’s Wally?. Training data is provided in Matlab format. Note that the links on that website are dead, but the data (along with the source of an approach taken by Sean McLoone and colleagues) can be found here (see SCM link). Seems like one place to start.

Asked By: tdc


Answers:

You could try template matching: note which location produces the highest resemblance, then use machine learning to narrow it down further. That is still very difficult, and given the accuracy of template matching, it may just return every face or face-like region. I think you will need more than just machine learning if you hope to do this consistently.
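
As a sketch of that first template-matching pass, here is a minimal normalized cross-correlation search in plain NumPy (a real pipeline would use something like OpenCV’s cv2.matchTemplate; the image and template below are toy placeholders):

```python
import numpy as np

def match_template(image, template):
    """Return ((row, col), score) where normalized cross-correlation peaks."""
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            w = w - w.mean()
            denom = np.sqrt((w ** 2).sum()) * tnorm
            if denom == 0:
                # Constant window: correlation is undefined, skip it.
                continue
            score = (w * t).sum() / denom
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos, best
```

An exact copy of the template embedded in the image scores 1.0 at its location; face-like but non-identical regions score lower, which is why a second (learned) stage is still needed.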

Answered By: a sandwhich

Maybe you should start by breaking the problem into two smaller ones:

  1. Create an algorithm that separates people from the background.
  2. Train a neural network classifier with as many positive and negative examples as possible.

Those are still two very big problems to tackle…
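
Sticking with Python for a moment, step 2 could be prototyped as a one-layer “network” (logistic regression) trained by gradient descent; the patch data below is synthetic and only stands in for real positive/negative crops:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 8x8 grayscale patches flattened to 64-vectors.
# Positives are brighter on average than negatives (a toy separable task).
pos = rng.normal(0.7, 0.1, size=(50, 64))
neg = rng.normal(0.3, 0.1, size=(50, 64))
X = np.vstack([pos, neg])
y = np.array([1] * 50 + [0] * 50)

# One-layer network (logistic regression) trained by gradient descent.
w = np.zeros(64)
b = 0.0
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activations
    w -= 0.5 * (X.T @ (p - y)) / len(y)       # gradient of log-loss w.r.t. w
    b -= 0.5 * (p - y).mean()                 # gradient w.r.t. bias

pred = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (pred == y).mean()
```

A real detector would of course use a deeper network and real image crops, but the training loop has the same shape.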

BTW, I would choose C++ and OpenCV; they seem much better suited for this.

Answered By: WeaselFox

This is not impossible, but very difficult, because you really have no example of a successful match. If you have multiple states (in this case, more examples of solved Where’s Wally drawings), you could feed multiple pictures into an image recognition program, treat it as a hidden Markov model, and use something like the Viterbi algorithm for inference ( http://en.wikipedia.org/wiki/Viterbi_algorithm ).

That’s the way I would approach it, but it assumes you have multiple images that you can give it as examples of the correct answer so it can learn. If you only have one picture, then I’m sorry, you may need to take another approach.
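
For reference, a minimal Viterbi implementation looks like this (the two-state model in the test is a textbook toy, not anything Wally-specific):

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for an observation sequence.

    obs      : sequence of observation indices
    start_p  : start_p[s]    = P(state s at t=0)
    trans_p  : trans_p[s, t] = P(state t | state s)
    emit_p   : emit_p[s, o]  = P(observation o | state s)
    """
    # Log probabilities avoid underflow on long sequences.
    v = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = v[:, None] + np.log(trans_p)   # scores[s, t]
        back.append(scores.argmax(axis=0))      # best predecessor per state
        v = scores.max(axis=0) + np.log(emit_p[:, o])
    # Trace back the best path from the final best state.
    path = [int(v.argmax())]
    for best_prev in reversed(back):
        path.append(int(best_prev[path[-1]]))
    return path[::-1]
```

Given a few annotated Wally scenes you would have to define what the hidden states and observations are; that modeling step is the hard part, not the inference.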

Answered By: Lostsoul

I noticed that there are two main features which are almost always visible:

  1. the red-white striped shirt
  2. dark brown hair under the fancy cap

So I would do it the following way:

search for striped shirts:

  • filter out red and white color (with thresholds on the HSV converted image). That gives you two mask images.
  • add them together -> that’s the main mask for searching striped shirts.
  • create a new image with all the filtered out red converted to pure red (#FF0000) and all the filtered out white converted to pure white (#FFFFFF).
  • now correlate this pure red-white image with a stripe pattern image (I think all the Waldos have quite perfect horizontal stripes, so rotation of the pattern shouldn’t be necessary). Do the correlation only inside the above-mentioned main mask.
  • try to group together clusters that could have resulted from one shirt.
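
A rough sketch of the red/white HSV filtering step in NumPy; the hue/saturation/value thresholds below are guesses that would need tuning on real images:

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB (floats in [0, 1]) -> (h, s, v), each in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    v = maxc
    delta = maxc - minc
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    # Hue: piecewise, depending on which channel is the maximum.
    d = np.maximum(delta, 1e-12)
    h = np.zeros_like(maxc)
    h = np.where(maxc == r, ((g - b) / d) % 6, h)
    h = np.where(maxc == g, (b - r) / d + 2, h)
    h = np.where(maxc == b, (r - g) / d + 4, h)
    h = np.where(delta == 0, 0.0, h / 6)
    return h, s, v

def shirt_mask(img, s_min=0.5, v_min=0.6):
    """Union of 'red-ish' and 'white-ish' pixel masks (thresholds are guesses)."""
    h, s, v = rgb_to_hsv(img)
    red = ((h < 0.05) | (h > 0.95)) & (s > s_min) & (v > v_min)
    white = (s < 0.2) & (v > 0.85)
    return red | white
```

The combined mask is the “main mask” described above; the stripe-pattern correlation would then run only inside it.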

If there is more than one ‘shirt’, that is, more than one cluster of positive correlation, search for other features, like the dark brown hair:

search for brown hair:

  • filter out the specific brown hair color using the HSV converted image and some thresholds.
  • search for a certain area in this masked image – not too big and not too small.
  • now search for a ‘hair area’ that is just above a (before) detected striped shirt and has a certain distance to the center of the shirt.
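
The “certain area” filter could be sketched as connected-component labeling with an area test (pure Python/NumPy here; scipy.ndimage.label or mahotas.label would do the same faster). Returning centroids makes the hair-above-shirt distance check easy:

```python
import numpy as np

def regions(mask, min_area=4, max_area=100):
    """Centroids of 4-connected components of a boolean mask whose
    pixel count falls within [min_area, max_area]."""
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    out = []
    for r0 in range(h):
        for c0 in range(w):
            if mask[r0, c0] and not seen[r0, c0]:
                # Flood-fill one component with an explicit stack.
                stack, pixels = [(r0, c0)], []
                seen[r0, c0] = True
                while stack:
                    r, c = stack.pop()
                    pixels.append((r, c))
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if 0 <= nr < h and 0 <= nc < w and mask[nr, nc] and not seen[nr, nc]:
                            seen[nr, nc] = True
                            stack.append((nr, nc))
                if min_area <= len(pixels) <= max_area:
                    rows, cols = zip(*pixels)
                    out.append((sum(rows) / len(rows), sum(cols) / len(cols)))
    return out
```

Run it on the brown-hair mask, then keep only centroids that lie just above a detected shirt centroid.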

Answered By: Cyril

Here’s an implementation with mahotas:

from pylab import imshow
import numpy as np
import mahotas
wally = mahotas.imread('DepartmentStore.jpg')

wfloat = wally.astype(float)
r,g,b = wfloat.transpose((2,0,1))

Split into red, green, and blue channels. It’s better to use floating point arithmetic below, so we convert at the top.

w = wfloat.mean(2)

w is the white channel.

pattern = np.ones((24,16), float)
for i in range(2):
    pattern[i::4] = -1

Build up a pattern of -1,-1,+1,+1 on the vertical axis. This is Wally’s shirt.

v = mahotas.convolve(r-w, pattern)

Convolve with red minus white. This will give a strong response where the shirt is.

mask = (v == v.max())
mask = mahotas.dilate(mask, np.ones((48,24)))

Look for the maximum value and dilate it to make it visible. Now, we tone down the whole image, except the region of interest:

wally = (wally - .8*wally * ~mask[:,:,None]).astype(np.uint8)
imshow(wally)

And we get Waldo!

Answered By: luispedro

Here’s a solution using neural networks that works nicely.

The neural network is trained on several solved examples that are marked with bounding boxes indicating where Wally appears in the picture. The goal of the network is to minimize the error between the predicted box and the actual box from training/validation data.

The network above uses the TensorFlow Object Detection API to perform training and predictions.
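
The error between the predicted and actual boxes is typically scored with intersection-over-union (IoU); a minimal version (the (x1, y1, x2, y2) box format is an assumption):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero size if the boxes don't intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)
```

IoU is 1.0 for a perfect prediction and 0.0 for a complete miss; detection benchmarks usually count a prediction as correct above some threshold such as 0.5.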

Answered By: Tadej Magajna