Compute accuracy with TensorFlow 1

Question:

Below is the code that builds a network. With probs = tf.nn.softmax(logits), I get the class probabilities:

def build_network_test(input_images, labels, num_classes):
    logits = embedding_model(input_images, train_phase=True)
    logits = fully_connected(logits, num_classes, activation_fn=None,
                             scope='tmp')

    with tf.variable_scope('loss') as scope:
        with tf.name_scope('soft_loss'):
            # Mean cross-entropy loss over the batch.
            softmax = tf.reduce_mean(
                tf.nn.sparse_softmax_cross_entropy_with_logits(
                    logits=logits, labels=labels))
            # Per-example class probabilities.
            probs = tf.nn.softmax(logits)
        scope.reuse_variables()
    with tf.name_scope('acc'):
        # Fraction of examples whose highest-scoring class matches the label.
        accuracy = tf.reduce_mean(
            tf.cast(tf.equal(tf.argmax(logits, 1), labels), tf.float32))

    with tf.name_scope('loss/'):
        tf.summary.scalar('TotalLoss', softmax)

    return logits, softmax, accuracy, probs  # loss, accuracy and probability ops

In addition, I compute accuracy and loss with the following code snippet:

train_acc, train_loss = 0.0, 0.0  # running totals across batches
for idx in range(num_of_batches):
    batch_images, batch_labels = get_batch(idx, FLAGS.batch_size, mm_labels, mm_data)
    _, summary_str, train_batch_acc, train_batch_loss, probabilities_1 = sess.run(
        [train_op, summary_op, accuracy, total_loss, probs],
        feed_dict={
            input_images: batch_images - mean_data_img_train,
            labels: batch_labels,
        })

    train_acc += train_batch_acc
    train_loss += train_batch_loss

# Average accuracy over all batches, expressed as a percentage.
train_acc /= num_of_batches
train_acc = train_acc * 100

My question:

I get probabilities for two sets of feature values. Afterwards, I average these probabilities with the following code:

mvalue = np.mean(np.array([probabilities_1, probabilities_2]), axis=0)

Now, I want to compute accuracy on mvalue. Can someone give me pointers on how to do it?

What I have done so far

# Predicted class per example; eval() runs the op in a temporary session.
tmp = tf.argmax(input=mvalue, axis=1)
an_array = tmp.eval(session=tf.compat.v1.Session())

This gives me the predicted labels; however, I want an accuracy value.

Asked By: cswah


Answers:

What you have done so far is good. If I understood correctly, you can compute the mean accuracy easily with tf.compat.v1.keras.metrics.categorical_accuracy(). Here is a dummy example adapted to your situation; hopefully it helps:

import numpy as np
import tensorflow as tf

probabilities_1 = tf.constant([[0.5, 0.1]])
probabilities_2 = tf.constant([[0.1, 0.3]])

# Average the two probability tensors element-wise.
mvalue = np.mean(np.array([probabilities_1, probabilities_2]), axis=0)

# categorical_accuracy expects one-hot labels and per-class scores;
# it takes the argmax of both internally, so pass mvalue directly.
y_true = tf.constant([[1.0, 0.0]])  # one-hot label for class 0

tf.compat.v1.keras.metrics.categorical_accuracy(y_true, mvalue)
# <tf.Tensor: shape=(1,), dtype=float32, numpy=array([1.], dtype=float32)>
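
If the labels are plain integer class indices rather than one-hot vectors, sparse_categorical_accuracy gives the same result without the one-hot conversion. A minimal sketch under that assumption, reusing the averaged probabilities from above:

import numpy as np
import tensorflow as tf

mvalue = np.array([[0.3, 0.2]])  # averaged probabilities from above
y_true = np.array([0])           # integer class label, one per example

# sparse_categorical_accuracy argmaxes y_pred and compares the result
# with the integer labels directly.
tf.compat.v1.keras.metrics.sparse_categorical_accuracy(y_true, mvalue)
# array([1.], dtype=float32)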
Answered By: Mohammad Ahmed

There are two methods to compute accuracy in this scenario. Both yield the same result:

Method 1

If I am correct, you have to run your code snippet twice to get the values for probabilities_1 and probabilities_2, so there are also two individual accuracy values, one for each input.

Now, let's combine these probabilities:

mvalue = np.mean(np.array([probabilities_1, probabilities_2]), axis=0)

Next:

from sklearn import metrics

# y_hat: index of the highest averaged probability per example.
predicted_labels = tf.argmax(mvalue, 1)
# Of course, in TF1 you have to run a Session to get values out of tensors.
m_preds = predicted_labels.eval(session=tf.compat.v1.Session())

# Computing accuracy is then straightforward; y_true holds the
# ground-truth labels for the same examples.
accuracy = metrics.accuracy_score(y_true, m_preds)
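
Here y_true is simply the array of ground-truth labels for the examples the probabilities came from. A quick usage example with made-up numbers (illustrative only):

import numpy as np
from sklearn import metrics

mvalue = np.array([[0.7, 0.3],
                   [0.2, 0.8],
                   [0.6, 0.4]])   # averaged probabilities
y_true = np.array([0, 1, 1])      # ground-truth labels

m_preds = np.argmax(mvalue, axis=1)             # [0, 1, 0]
print(metrics.accuracy_score(y_true, m_preds))  # 0.666...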

Method 2

It seems like you are also returning logits from the build_network_test function. In your main code, you can also compute accuracy as:

from sklearn import metrics

# Average the raw logits instead of the probabilities.
mlogits = np.mean(np.array([logits_1, logits_2]), axis=0)
m_probs = tf.nn.softmax(mlogits)
m_preds = tf.argmax(m_probs, 1)
m_preds_value = m_preds.eval(session=tf.compat.v1.Session())

# Compute accuracy against the ground-truth labels.
accuracy = metrics.accuracy_score(y_true, m_preds_value)
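
Since mlogits is already a NumPy array, the same result can be computed without a Session at all: softmax is monotonic per row, so the argmax of the logits equals the argmax of the probabilities. A minimal NumPy-only sketch:

import numpy as np

# argmax over the logits == argmax over softmax(logits),
# so the softmax step can be skipped entirely.
m_preds_value = np.argmax(mlogits, axis=1)
accuracy = np.mean(m_preds_value == np.asarray(y_true))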
Answered By: saad_saeed