Tensorflow Slim: TypeError: Expected int32, got list containing Tensors of type '_Message' instead

Question:

I am following this tutorial to learn TensorFlow Slim, but upon running the following code for Inception:

import matplotlib.pyplot as plt
import numpy as np
import os
import tensorflow as tf
import urllib2

from datasets import imagenet
from nets import inception
from preprocessing import inception_preprocessing

slim = tf.contrib.slim

batch_size = 3
image_size = inception.inception_v1.default_image_size
checkpoints_dir = '/tmp/checkpoints/'
with tf.Graph().as_default():
    url = 'https://upload.wikimedia.org/wikipedia/commons/7/70/EnglishCockerSpaniel_simon.jpg'
    image_string = urllib2.urlopen(url).read()
    image = tf.image.decode_jpeg(image_string, channels=3)
    processed_image = inception_preprocessing.preprocess_image(image, image_size, image_size, is_training=False)
    processed_images = tf.expand_dims(processed_image, 0)

    # Create the model, use the default arg scope to configure the batch norm parameters.
    with slim.arg_scope(inception.inception_v1_arg_scope()):
        logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
    probabilities = tf.nn.softmax(logits)

    init_fn = slim.assign_from_checkpoint_fn(
        os.path.join(checkpoints_dir, 'inception_v1.ckpt'),
        slim.get_model_variables('InceptionV1'))

    with tf.Session() as sess:
        init_fn(sess)
        np_image, probabilities = sess.run([image, probabilities])
        probabilities = probabilities[0, 0:]
        sorted_inds = [i[0] for i in sorted(enumerate(-probabilities), key=lambda x:x[1])]

    plt.figure()
    plt.imshow(np_image.astype(np.uint8))
    plt.axis('off')
    plt.show()

    names = imagenet.create_readable_names_for_imagenet_labels()
    for i in range(5):
        index = sorted_inds[i]
        print('Probability %0.2f%% => [%s]' % (probabilities[index], names[index]))

I seem to be getting this set of errors:

Traceback (most recent call last):
  File "DA_test_pred.py", line 24, in <module>
    logits, _ = inception.inception_v1(processed_images, num_classes=1001, is_training=False)
  File "/home/deepankar1994/Desktop/MTP/TensorFlowEx/TFSlim/models/slim/nets/inception_v1.py", line 290, in inception_v1
    net, end_points = inception_v1_base(inputs, scope=scope)
  File "/home/deepankar1994/Desktop/MTP/TensorFlowEx/TFSlim/models/slim/nets/inception_v1.py", line 96, in inception_v1_base
    net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1053, in concat
    dtype=dtypes.int32).get_shape(
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 651, in convert_to_tensor
    as_ref=False)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 716, in internal_convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 176, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/constant_op.py", line 165, in constant
    tensor_util.make_tensor_proto(value, dtype=dtype, shape=shape, verify_shape=verify_shape))
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 367, in make_tensor_proto
    _AssertCompatible(values, dtype)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/tensor_util.py", line 302, in _AssertCompatible
    (dtype.name, repr(mismatch), type(mismatch).__name__))
TypeError: Expected int32, got list containing Tensors of type '_Message' instead.

This is strange because all of this code is from their official guide. I am new to TF and any help would be appreciated.

Asked By: random40154443

||

Answers:

I got the same problem when using the 1.0 release, and I was able to make it work without rolling back to a previous version.

The problem is caused by a change in the API. This discussion helped me find the solution: Google group >
Recent API Changes in TensorFlow

You just have to update every line that calls tf.concat.

For example,

net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])

should be changed to

net = tf.concat([branch_0, branch_1, branch_2, branch_3], 3)
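
As a self-contained sanity check of the new argument order (a minimal sketch, assuming TensorFlow 1.x and using dummy tensors rather than the actual Inception branches):

import tensorflow as tf

# Two dummy feature maps that agree on every dimension except channels.
a = tf.zeros([1, 8, 8, 16])
b = tf.zeros([1, 8, 8, 32])

# TF 1.0+: the list of tensors comes first, the axis second.
net = tf.concat([a, b], 3)                 # shape (1, 8, 8, 48)
# Equivalent keyword form:
net_kw = tf.concat(values=[a, b], axis=3)

with tf.Session() as sess:
    print(sess.run(tf.shape(net)))         # [ 1  8  8 48]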

Note:

I was able to use the models without problems, but I still got an error afterwards when loading the pretrained weights.
It seems the slim module has changed several times since the checkpoint files were made; the graph created by the code and the one stored in the checkpoint file were different.

Note 2:

I was able to use the pretrained weights for inception_resnet_v2 by adding biases_initializer=None to every conv2d layer.
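
One way to do that without editing the network definition file is an outer arg_scope (a sketch only, assuming the nets/inception_resnet_v2.py module from the same models/slim repository; the placeholder shape is just the model's default 299x299 input):

import tensorflow as tf
from nets import inception_resnet_v2

slim = tf.contrib.slim

images = tf.placeholder(tf.float32, [None, 299, 299, 3])

with slim.arg_scope(inception_resnet_v2.inception_resnet_v2_arg_scope()):
    # Drop the bias variables from every conv2d so the graph built here
    # matches a checkpoint that was written without them.
    with slim.arg_scope([slim.conv2d], biases_initializer=None):
        logits, end_points = inception_resnet_v2.inception_resnet_v2(
            images, num_classes=1001, is_training=False)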

Answered By: rAyyy

I got the same error in my own code.

I had the following:

logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b)
loss = tf.reduce_mean(
  tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.concat(train_labels, 0), logits=logits))

Here outputs has shape (10, 64, 64).

The code wants to concatenate outputs[0] through outputs[9] into a new tensor of shape (640, 64).

But the tf.concat API would not let me do this.

(train_labels is handled the same way.)

So I rewrote it as:

A = tf.concat(0,[outputs[0],outputs[1]])
A = tf.concat(0,[A,outputs[2]])
A = tf.concat(0,[A,outputs[3]])
A = tf.concat(0,[A,outputs[4]])
A = tf.concat(0,[A,outputs[5]])
A = tf.concat(0,[A,outputs[6]])
A = tf.concat(0,[A,outputs[7]])
A = tf.concat(0,[A,outputs[8]])
A = tf.concat(0,[A,outputs[9]])
B = tf.concat(0,[train_labels[0],train_labels[1]])
B = tf.concat(0,[B,train_labels[2]])
B = tf.concat(0,[B,train_labels[3]])
B = tf.concat(0,[B,train_labels[4]])
B = tf.concat(0,[B,train_labels[5]])
B = tf.concat(0,[B,train_labels[6]])
B = tf.concat(0,[B,train_labels[7]])
B = tf.concat(0,[B,train_labels[8]])
B = tf.concat(0,[B,train_labels[9]])

logits = tf.nn.xw_plus_b(tf.concat(0, A), w, b)
loss = tf.reduce_mean(
  tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.concat(0, B), logits=logits))

Now it runs.
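
For reference, with the TF 1.0 argument order the whole chain collapses back into a single call per list (a minimal sketch, assuming outputs and train_labels are Python lists of ten (64, 64) tensors, as above):

import tensorflow as tf

# Hypothetical stand-ins for the ten per-step outputs and labels.
outputs = [tf.zeros([64, 64]) for _ in range(10)]
train_labels = [tf.zeros([64, 64]) for _ in range(10)]

# TF 1.0+: the list of tensors comes first, the axis second.
A = tf.concat(outputs, 0)        # shape (640, 64)
B = tf.concat(train_labels, 0)   # shape (640, 64)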

Answered By: 陳立麟

Explicitly writing the names of the arguments solves the problem.

Instead of

net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])

use

net = tf.concat(axis=3, values=[branch_0, branch_1, branch_2, branch_3])

Answered By: Fariborz Ghavamian

I found most people answering this the wrong way. It's just due to the change in tf.concat. Instead of

net = tf.concat(3, [branch_0, branch_1, branch_2, branch_3])

use the following

net = tf.concat(values=[branch_0, branch_1, branch_2, branch_3], axis=3)

Remember that when you mix positional and keyword arguments, the positional ones must come before the keyword ones.
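
For instance (a minimal sketch with dummy tensors standing in for the branches):

import tensorflow as tf

branch_0 = tf.zeros([1, 8, 8, 16])
branch_1 = tf.zeros([1, 8, 8, 32])

# Positional values, keyword axis: fine.
net = tf.concat([branch_0, branch_1], axis=3)

# A keyword argument before a positional one is a SyntaxError:
# net = tf.concat(axis=3, [branch_0, branch_1])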

Answered By: nabin