Why do we name variables in TensorFlow?

Question:

In some places I have seen variables initialized with names, and in other places without names. For example:

# With name
var = tf.Variable(0, name="counter")

# Without
one = tf.constant(1)

What is the point of naming the variable var "counter"?

Asked By: randomizer


Answers:

The name parameter is optional (you can create variables and constants with or without it), and the variable you use in your program does not depend on it. Names can be helpful in a couple of places:

When you want to save or restore your variables (you can save them to a binary file after the computation). From the docs:

By default, it uses the value of the Variable.name property for each
variable

import tensorflow as tf

matrix_1 = tf.Variable([[1, 2], [2, 3]], name="v1")
matrix_2 = tf.Variable([[3, 4], [5, 6]], name="v2")
init = tf.initialize_all_variables()

saver = tf.train.Saver()

sess = tf.Session()
sess.run(init)
save_path = saver.save(sess, "/model.ckpt")
sess.close()

Even though your Python variables are named matrix_1 and matrix_2, they are saved as v1 and v2 in the file.

Names are also used in TensorBoard to label nodes in the graph visualization. You can even group them by using the same name scope:

import tensorflow as tf

with tf.name_scope('hidden') as scope:
  a = tf.constant(5, name='alpha')
  W = tf.Variable(tf.random_uniform([1, 2], -1.0, 1.0), name='weights')
  b = tf.Variable(tf.zeros([1]), name='biases')

Answered By: Salvador Dali

You can imagine the Python namespace and the TensorFlow namespace as two parallel universes. Names in TensorFlow space are the "real" attributes belonging to TensorFlow variables, while names in Python space are just temporary pointers to those variables during one run of your script. That is why only TensorFlow names are used when saving and restoring variables: the Python namespace no longer exists after the script terminates, but the TensorFlow namespace is still there in your saved files.
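A minimal sketch of the two "universes" (hypothetical names; tensorflow.compat.v1 assumed): even after the Python pointer is deleted, the tensor is still reachable through its TensorFlow name.

```python
import tensorflow.compat.v1 as tf  # TF1-style graph mode on a TF2 install
tf.disable_v2_behavior()

c = tf.constant(42, name='answer')
del c  # the Python name is gone...

# ...but the TensorFlow name still lives in the default graph
t = tf.get_default_graph().get_tensor_by_name('answer:0')
with tf.Session() as sess:
    print(sess.run(t))  # -> 42
```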

Answered By: Lifu Huang

Consider the following code and its output:

import numpy as np
import tensorflow as tf

def f():
    a = tf.Variable(np.random.normal(), dtype = tf.float32, name = 'test123')

def run123():
    f()
    init = tf.global_variables_initializer()
    with tf.Session() as sess123:
        sess123.run(init)
        print(sess123.run(fetches = ['test123:0']))
        print(sess123.run(fetches = [a]))

run123()

output:

[0.10108799]

NameError                                 Traceback (most recent call last)
<ipython-input> in <module>()
     10         print(sess123.run(fetches = [a]))
     11
---> 12 run123()

<ipython-input> in run123()
      8         sess123.run(init)
      9         print(sess123.run(fetches = ['test123:0']))
---> 10         print(sess123.run(fetches = [a]))
     11
     12 run123()

NameError: name 'a' is not defined

The variable a is defined in the scope of f() and is not available outside that scope, i.e. in run123(). But the default graph still has to refer to the underlying tensor by something, so that it can be referenced as needed across different scopes, and that is where its TensorFlow name comes in handy.
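One way to get the variable object back outside f() is to look it up by its TensorFlow name in the global-variables collection (a sketch reusing the names above; tensorflow.compat.v1 assumed):

```python
import numpy as np
import tensorflow.compat.v1 as tf  # TF1-style graph mode on a TF2 install
tf.disable_v2_behavior()

def f():
    tf.Variable(np.random.normal(), dtype=tf.float32, name='test123')

f()

# The Python name is lost, but the TensorFlow name still identifies it
a = [v for v in tf.global_variables() if v.name == 'test123:0'][0]
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(a))  # same value the string fetch 'test123:0' returns
```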

Answered By: MiloMinderbinder

In principle, to distinguish different variables we could rely entirely on the Python name (the left-hand side of the assignment; we call it the Python name to avoid confusion, such as v in the following example). However, in practice we often rebind a Python name to other objects (i.e., other ops in TensorFlow), for example:

v = tf.get_variable("v1", [3], initializer = tf.zeros_initializer)
v = tf.get_variable("v2", [5], initializer = tf.zeros_initializer)

First, the Python name v is bound to the tensor from the first line (tf.get_variable("v1", [3], initializer = tf.zeros_initializer)). Then v is rebound to the tensor from the second line (tf.get_variable("v2", [5], initializer = tf.zeros_initializer)) and no longer refers to the first tensor. If we had not given the TensorFlow names v1 and v2, how could we identify the tensor created on the first line?

Answered By: pangdan