Tensorflow: How to get a tensor by name?

Question:

I’m having trouble retrieving a tensor by name; I don’t even know if it’s possible.

I have a function that creates my graph:

def create_structure(tf, x, input_size, dropout):
    with tf.variable_scope("scale_1") as scope:
        W_S1_conv1 = deep_dive.weight_variable_scaling([7, 7, 3, 64], name='W_S1_conv1')
        b_S1_conv1 = deep_dive.bias_variable([64])
        S1_conv1 = tf.nn.relu(deep_dive.conv2d(x_image, W_S1_conv1, strides=[1, 2, 2, 1], padding='SAME') + b_S1_conv1, name="Scale1_first_relu")
.
.
.
return S3_conv1,regularizer

I want to access the variable S1_conv1 outside this function. I tried:

with tf.variable_scope('scale_1') as scope_conv:
    tf.get_variable_scope().reuse_variables()
    ft = tf.get_variable('Scale1_first_relu')

But that is giving me an error:

ValueError: Under-sharing: Variable scale_1/Scale1_first_relu does not exist, disallowed. Did you mean to set reuse=None in VarScope?

But this works:

with tf.variable_scope('scale_1') as scope_conv:
    tf.get_variable_scope().reuse_variables()
    ft = tf.get_variable('W_S1_conv1')

I can get around this with

return S3_conv1,regularizer, S1_conv1

but I don’t want to do that.

I think my problem is that S1_conv1 is not really a variable; it’s just a tensor. Is there a way to do what I want?

Asked By: protas


Answers:

All tensors have string names, which you can list as follows:

[tensor.name for tensor in tf.get_default_graph().as_graph_def().node]

Once you know the name, you can fetch the tensor using <name>:0 (the 0 is the index of the op’s output; most ops have a single output, so it is almost always :0).

For instance, if you do this:

tf.constant(1)+tf.constant(2)

you get the following tensor names:

[u'Const', u'Const_1', u'add']

So you can fetch the output of the addition as:

sess.run('add:0')

Note that this is not part of the public API. Automatically generated tensor names are an implementation detail and may change between TensorFlow versions.
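
For completeness, here is a self-contained TF 1.x version of the snippets above (a sketch; the printed names are whatever the runtime happens to assign):

import tensorflow as tf

a = tf.constant(1)
b = tf.constant(2)
c = a + b  # creates an op whose auto-generated name is typically 'add'

# List every node name in the default graph
print([n.name for n in tf.get_default_graph().as_graph_def().node])
# e.g. ['Const', 'Const_1', 'add']

with tf.Session() as sess:
    print(sess.run('add:0'))  # 3 -- fetches can be tensor names, not just Tensor objects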

Answered By: Yaroslav Bulatov

There is a function tf.Graph.get_tensor_by_name(). For instance:

import tensorflow as tf

c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
e = tf.matmul(c, d, name='example')

with tf.Session() as sess:
    result = sess.run(e)   # evaluates the matmul; result is a NumPy array
    print(e.name)          # example:0
    test = tf.get_default_graph().get_tensor_by_name("example:0")
    print(test)            # Tensor("example:0", shape=(2, 2), dtype=float32)

Answered By: apfalz

All you gotta do in this case is:

ft = tf.get_default_graph().get_tensor_by_name('scale_1/Scale1_first_relu:0')
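
As a self-contained TF 1.x sketch of why that name works: ops created inside a variable_scope get the scope name prefixed onto their tensor names, so the relu from the question ends up as scale_1/Scale1_first_relu:0 (the deep_dive helpers are not needed to see this):

import tensorflow as tf

with tf.variable_scope("scale_1"):
    x = tf.constant([[-1.0, 2.0]])
    relu = tf.nn.relu(x, name="Scale1_first_relu")

print(relu.name)  # scale_1/Scale1_first_relu:0

ft = tf.get_default_graph().get_tensor_by_name("scale_1/Scale1_first_relu:0")

with tf.Session() as sess:
    print(sess.run(ft))  # [[0. 2.]]
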
Answered By: Kislay Kunal

Or simpler still, you can read the tensor names out of the .pbtxt file that accompanies the model’s .pb file. Since the names depend on the model, every case is different.
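
A minimal sketch of reading the node names out of such a file (TF 1.x; model.pbtxt and model.pb are hypothetical paths, substitute your own):

import tensorflow as tf
from google.protobuf import text_format

graph_def = tf.GraphDef()

# Text-format graph definition (.pbtxt)
with open("model.pbtxt", "r") as f:
    text_format.Merge(f.read(), graph_def)

# Or, for a binary .pb file instead:
# with open("model.pb", "rb") as f:
#     graph_def.ParseFromString(f.read())

# Every op name in the graph; append ":0" for the corresponding tensor name
print([node.name for node in graph_def.node])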

Answered By: Kris Stern