What is the difference between np.mean and tf.reduce_mean?

Question:

In the MNIST beginner tutorial, there is the statement

accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

tf.cast basically changes the type of the tensor, but what is the difference between tf.reduce_mean and np.mean?

Here is the doc on tf.reduce_mean:

reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)

input_tensor: The tensor to reduce. Should have numeric type.

reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.

# 'x' is [[1., 1.],
#         [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1.,  2.]

For a 1D vector, it looks like np.mean == tf.reduce_mean, but I don’t understand what’s happening in tf.reduce_mean(x, 1) ==> [1., 2.]. tf.reduce_mean(x, 0) ==> [1.5, 1.5] kind of makes sense, since mean of [1, 2] and [1, 2] is [1.5, 1.5], but what’s going on with tf.reduce_mean(x, 1)?

Asked By: O.rka


Answers:

The functionality of numpy.mean and tensorflow.reduce_mean is the same. They do the same thing, as you can see from the numpy and tensorflow documentation. Let's look at an example:

import numpy as np
import tensorflow as tf

c = np.array([[3., 4.], [5., 6.], [6., 7.]])
print(np.mean(c, 1))

Mean = tf.reduce_mean(c, 1)
with tf.Session() as sess:
    result = sess.run(Mean)
    print(result)

Output

[ 3.5  5.5  6.5]
[ 3.5  5.5  6.5]

Here you can see that when axis (numpy) or reduction_indices (tensorflow) is 1, it computes the mean across (3,4), (5,6) and (6,7), so 1 defines the axis across which the mean is computed. When it is 0, the mean is computed across (3,5,6) and (4,6,7), and so on. I hope you get the idea.
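
To make the axis-0 case just as concrete, here is a minimal sketch reusing the same c as above:

print(np.mean(c, 0))    # mean down each column: (3,5,6) and (4,6,7)

with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(c, 0)))

Both lines print [ 4.66666667  5.66666667].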

Now what are the differences between them?

You can compute a numpy operation anywhere in Python. But a tensorflow operation must be run inside a tensorflow Session. You can read more about it here. So whenever you need to perform any computation on your tensorflow graph (or structure, if you will), it must be done inside a tensorflow Session.

Let's look at another example.

npMean = np.mean(c)
print(npMean+1)

tfMean = tf.reduce_mean(c)
Add = tfMean + 1
with tf.Session() as sess:
    result = sess.run(Add)
    print(result)

We can increase the mean by 1 in numpy just as you naturally would, but to do it in tensorflow, you need to perform it inside a Session; without a Session you can't do it. In other words, when you write tfMean = tf.reduce_mean(c), tensorflow doesn't compute it then. It only computes it in a Session, whereas numpy computes it instantly, the moment you call np.mean().
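
You can see this laziness directly: before any Session runs, tfMean is just a symbolic node in the graph, not a number. A small sketch, continuing from the code above (the exact tensor name may differ):

print(npMean)    # a plain number, computed immediately: 5.1666...
print(tfMean)    # something like Tensor("Mean:0", shape=(), dtype=float64), no value yet

with tf.Session() as sess:
    print(sess.run(tfMean))    # 5.1666..., computed only now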

I hope it makes sense.

Answered By: Shubhashis

The new documentation states that tf.reduce_mean() produces the same results as np.mean:

Equivalent to np.mean

It also has exactly the same parameters as np.mean. But here is an important difference: they produce the same results only on float values:

import tensorflow as tf
import numpy as np
from random import randint

num_dims = 10
rand_dim = randint(0, num_dims - 1)
c = np.random.randint(50, size=tuple([5] * num_dims)).astype(float)

with tf.Session() as sess:
    r1 = sess.run(tf.reduce_mean(c, rand_dim))
    r2 = np.mean(c, rand_dim)
    is_equal = np.array_equal(r1, r2)
    print(is_equal)
    if not is_equal:
        print(r1)
        print(r2)

If you remove the type conversion (the .astype(float) above), you will see different results.
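
For instance, on integer inputs tf.reduce_mean keeps the input dtype and truncates the result, while np.mean always promotes to a float. A minimal sketch of the gotcha (behavior of TensorFlow 1.x):

ints = np.array([1, 2])

print(np.mean(ints))    # 1.5, numpy promotes to float

with tf.Session() as sess:
    print(sess.run(tf.reduce_mean(ints)))    # 1, stays integer and loses the fraction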


In addition to this, many other tf.reduce_ functions, such as reduce_all, reduce_any, reduce_min, reduce_max and reduce_prod, produce the same values as their numpy analogs. Clearly, because they are operations, they can be executed only from inside a session.
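
The correspondence is easy to check for the other reductions too; a quick sketch with reduce_max, under the same session pattern:

c = np.array([[3., 4.], [5., 6.], [6., 7.]])

print(np.max(c, 1))                          # [ 4.  6.  7.]

with tf.Session() as sess:
    print(sess.run(tf.reduce_max(c, 1)))     # [ 4.  6.  7.]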

Answered By: Salvador Dali

The key here is the word reduce, a concept from functional programming, which makes it possible for reduce_mean in TensorFlow to keep a running average of the results of computations from a batch of inputs.

If you are not familiar with functional programming, this can seem mysterious, so first let us see what reduce does. If you were given a list like [1,2,5,4] and were told to compute the mean, that is easy: just pass the whole array to np.mean and you get the mean. However, what if you had to compute the mean of a stream of numbers? In that case, you would first have to assemble the array by reading from the stream and then call np.mean on the resulting array; you would have to write some more code.

An alternative is to use the reduce paradigm. As an example, look at how we can use reduce in Python to calculate the sum of numbers (in Python 3, reduce must be imported from functools):

reduce(lambda x, y: x + y, [1, 2, 5, 4])

It works like this (a runnable sketch follows the list):

  1. Step 1: Read two numbers from the list: 1 and 2. Evaluate lambda(1, 2). reduce stores the result, 3. Note: this is the only step where two numbers are read off the list.
  2. Step 2: Read the next number from the list: 5. Evaluate lambda(3, 5) (3 being the result from step 1, which reduce stored). reduce stores the result, 8.
  3. Step 3: Read the next number from the list: 4. Evaluate lambda(8, 4) (8 being the result of step 2, which reduce stored). reduce stores the result, 12.
  4. Step 4: There are no numbers left in the list, so return the stored result of 12.
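
To watch these steps happen, here is a runnable sketch with a named function in place of the lambda, so each combination is printed:

from functools import reduce

def add(acc, nxt):
    print("combining", acc, "and", nxt)
    return acc + nxt

print(reduce(add, [1, 2, 5, 4]))
# combining 1 and 2
# combining 3 and 5
# combining 8 and 4
# 12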

Read more here: Functional Programming in Python.

To see how this applies to TensorFlow, look at the following block of code, which defines a simple graph that computes the mean of its input. The input, however, is not a single float but an array of floats; reduce_mean computes the mean value over all of them.

import tensorflow as tf


inp = tf.placeholder(tf.float32)
mean = tf.reduce_mean(inp)

x = [1,2,3,4,5]

with tf.Session() as sess:
    print(mean.eval(feed_dict={inp : x}))
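
Because the placeholder was created without a fixed shape, the same graph handles arrays of any length; an illustrative continuation (not part of the original tutorial), assuming the graph above has been built:

with tf.Session() as sess:
    print(sess.run(mean, feed_dict={inp: [10., 20.]}))    # 15.0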

This pattern comes in handy when computing values over batches of images. Look at The Deep MNIST Example where you see code like:

# per-example booleans: did the predicted class match the label?
correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
# cast the booleans to floats and reduce them to a single scalar accuracy
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

Answered By: Nikhil George