Python: Calculating the accuracy of a neural network using TensorFlow

Question:

I am using TensorFlow and I have two tensors, prediction and label, where the label isn't one-hot. How do I work out the accuracy of my prediction? I tried using tf.metrics.accuracy and tf.metrics.auc, but both returned [0, 0]. This is my neural network:

from tensorflow.keras.utils import to_categorical
import tensorflow.compat.v1 as tf
from keras.datasets import mnist
from random import randint
import numpy as np
import math

tf.disable_eager_execution()

class AICore:
    def __init__(self, nodes_in_each_layer):
        self.data_in_placeholder = tf.placeholder("float", [None, nodes_in_each_layer[0]])
        self.data_out_placeholder = tf.placeholder("float")
        self.init_neural_network(nodes_in_each_layer)

    def init_neural_network(self, n_nodes_h):
        #n_nodes_h contains the number of nodes for each layer
        #n_nodes_h[0] = number of inputs
        #n_nodes_h[-1] = number of outputs
        self.layers = [None for i in range(len(n_nodes_h)-1)]
        for i in range(1, len(n_nodes_h)):
            self.layers[i-1] = {"weights":tf.Variable(tf.random_normal([n_nodes_h[i-1], n_nodes_h[i]])),
                                "biases":tf.Variable(tf.random_normal([n_nodes_h[i]]))}

    def neural_network_model(self, data):
        for i in range(len(self.layers)):
            data = tf.matmul(data, self.layers[i]["weights"]) + self.layers[i]["biases"]
            if i != len(self.layers)-1:
                data = tf.nn.relu(data)
        return data

    def train_neural_network(self, data, epochs, batch_size):
        prediction = self.neural_network_model(self.data_in_placeholder)
        cost = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(logits=prediction, labels=self.data_out_placeholder))
        accuracy = ???
        optimiser = tf.train.AdamOptimizer().minimize(cost)
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            sess.run(tf.local_variables_initializer())
            for epoch in range(epochs):
                data.reset_epoch()
                epoch_cost = 0
                for _ in range(len(data)//batch_size):
                    x, y = data.next_batch(batch_size)
                    feed_dict = {self.data_in_placeholder:x, 
                                 self.data_out_placeholder:y}
                    _, c = sess.run([optimiser, cost], feed_dict=feed_dict)
                    epoch_cost += c
                print("epoch_cost =", epoch_cost)
                print("accuracy =", ???)

class Data:
    def __init__(self):
        (self.x_train, self.y_train), (self.x_test, self.y_test) = mnist.load_data()
        self.idx = 0

    def __len__(self):
        return len(self.x_train)

    def next_batch(self, batch_size):
        new_idx = self.idx+batch_size
        x = self.x_train[self.idx:new_idx]
        y = self.y_train[self.idx:new_idx]
        assert x.shape[0] == batch_size, "ran out of data"
        self.idx = new_idx
        # flatten(x), onehot_encode(y)
        return x.reshape([batch_size,self.mult(x.shape[1:])]), to_categorical(y, 10)

    def reset_epoch(self):
        self.idx = 0

    def mult(self, _list):
        # return product of list elements
        from functools import reduce
        from operator import mul
        return reduce(mul, _list, 1)


n_nodes_h = [784, 500, 500, 500, 10]
batch_size = 100
epochs = 10

data_generator = Data()

core = AICore(n_nodes_h)
core.train_neural_network(data_generator, epochs, batch_size)

but I have no idea how to calculate the accuracy as a percentage.

Asked By: TheLizzard


Answers:

For such a requirement, sensitivity is a good metric (sensitivity basically represents how good the model is at detecting positives, e.g. frauds). There are some open-source Python projects that will help you move forward. Visit reference: sensitivity-analysis.

Sensitivity can be calculated using the confusion matrix of your predictions such as:

from sklearn.metrics import confusion_matrix

A confusion matrix is basically a representation of your original (true) distribution versus your predicted distribution. Sensitivity can then be calculated with a very simple formula on this matrix. You can learn about and practice with confusion matrices in detail. Visit reference: confusion-matrix.
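For illustration, here is a minimal sketch of that formula (the y_true and y_pred arrays are made up for the example, and labels=[1, 0] is passed so that the positive class sits in the first row, matching the cm1[0, ...] indexing used further down):

from sklearn.metrics import confusion_matrix

# made-up binary labels, illustrative only
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# labels=[1, 0] puts the positive class in row 0: [[TP, FN], [FP, TN]]
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])

# sensitivity (recall) = TP / (TP + FN), i.e. the first row's diagonal over its row sum
sensitivity = cm[0, 0] / (cm[0, 0] + cm[0, 1])
print('Sensitivity :', sensitivity)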

I have performed an analysis on a data set (e.g. test-bits) to calculate accuracy, sensitivity, and specificity; you can learn about it in detail. Visit reference: calculate-sensitivity-specifity-of-neural-network.

#Confusion matrix, Accuracy, sensitivity, and specificity
from sklearn.metrics import confusion_matrix


cm1 = confusion_matrix(test_df[['test']],predicted_class1)
print('Confusion Matrix :', cm1)

Confusion Matrix : [[37767  4374]
 [30521 27338]]

Then to calculate the required parameters:

total1=sum(sum(cm1))
#####from confusion matrix calculate accuracy
accuracy1=(cm1[0,0]+cm1[1,1])/total1
print ('Accuracy : ', accuracy1)

sensitivity1 = cm1[0,0]/(cm1[0,0]+cm1[0,1])
print('Sensitivity : ', sensitivity1 )

specificity1 = cm1[1,1]/(cm1[1,0]+cm1[1,1])
print('Specificity : ', specificity1)

Accuracy : 0.65105

Sensitivity : 0.896205595501

Specificity : 0.472493475518
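For the accuracy-as-a-percentage part of the question itself, the usual pattern in a TF1-style graph is to compare the argmax of the logits with the argmax of the labels and take the mean of the matches. A minimal sketch, assuming the one-hot labels produced by to_categorical in the question's Data class, that could stand in for the ??? placeholders:

# sketch only: prediction holds logits of shape [batch, 10] and
# self.data_out_placeholder holds one-hot labels of the same shape
correct = tf.equal(tf.argmax(prediction, axis=1),
                   tf.argmax(self.data_out_placeholder, axis=1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))

# in the training loop, evaluate it alongside the cost:
# _, c, acc = sess.run([optimiser, cost, accuracy], feed_dict=feed_dict)
# print("accuracy =", acc * 100, "%")

Since tf.reduce_mean returns a fraction in [0, 1], multiplying by 100 gives the accuracy as a percentage.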
