TypeError: Fetch argument array has invalid type numpy.ndarray, must be a string or Tensor. (Can not convert a ndarray into a Tensor or Operation.)
【Posted】: 2018-05-17 20:29:04
【Question】: I am trying to reproduce the results of the Siamese LSTM for comparing the semantic similarity of two sentences from here: https://github.com/dhwajraj/deep-siamese-text-similarity
I am using TensorFlow 1.4 and Python 2.7.
train.py works fine. To evaluate the model, I created a match_valid.tsv file, which is a subset of the "train_snli.txt" available there. I modified the getTsvTestData function in the input_helpers.py file:
def getTsvTestData(self, filepath):
    print("Loading testing/labelled data from " + filepath + "\n")
    x1 = []
    x2 = []
    y = []
    # positive samples from file
    for line in open(filepath):
        l = line.strip().split("\t")
        if len(l) < 3:
            continue
        x1.append(l[1].lower())  # text
        x2.append(l[0].lower())  # text
        y.append(int(l[2]))      # similarity score, 0 or 1
    return np.asarray(x1), np.asarray(x2), np.asarray(y)
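In eval.py this gets called through the repo's InputHelper class; roughly something like the lines below (the variable names here are just illustrative, not copied from the repo):

    inpH = InputHelper()
    x1_test, x2_test, y_test = inpH.getTsvTestData("match_valid.tsv")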
I am getting the error in this part of the code in eval.py:
for db in batches:
    x1_dev_b, x2_dev_b, y_dev_b = zip(*db)
    #x1_dev_b = tf.convert_to_tensor(x1_dev_b,)
    print("type x1_dev_b {}".format(type(x1_dev_b)))    # tuple
    print("type x2_dev_b {}".format(type(x2_dev_b)))    # tuple
    print("type y_dev_b {}\n".format(type(y_dev_b)))    # tuple

    feed = {input_x1: x1_dev_b,
            input_x2: x2_dev_b,
            input_y: y_dev_b,
            dropout_keep_prob: 1.0}

    batch_predictions, batch_acc, sim = sess.run([predictions, accuracy, sim], feed_dict=feed)
    print("type batch_predictions {}".format(type(batch_predictions)))  # numpy.ndarray
    print("type batch_acc {}".format(type(batch_acc)))                  # numpy.float32
    print("type sim {}".format(type(sim)))                              # numpy.ndarray

    all_predictions = np.concatenate([all_predictions, batch_predictions])
    print("\n printing batch predictions {}\n".format(batch_predictions))
    all_d = np.concatenate([all_d, sim])
    print("DEV acc {}\n".format(batch_acc))
I get the error below. I tried using print statements around sess.run() to find the types, but that did not help.
Traceback (most recent call last):
  File "eval.py", line 92, in <module>
    batch_predictions, batch_acc, sim = sess.run([predictions,accuracy,sim], feed_dict=feed)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 889, in run
    run_metadata_ptr)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 1105, in _run
    self._graph, fetches, feed_dict_tensor, feed_handles=feed_handles)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 414, in __init__
    self._fetch_mapper = _FetchMapper.for_fetch(fetches)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 234, in for_fetch
    return _ListFetchMapper(fetch)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 341, in __init__
    self._mappers = [_FetchMapper.for_fetch(fetch) for fetch in fetches]
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 242, in for_fetch
    return _ElementFetchMapper(fetches, contraction_fn)
  File "/home/joe/anaconda2/lib/python2.7/site-packages/tensorflow/python/client/session.py", line 275, in __init__
    % (fetch, type(fetch), str(e)))
TypeError: Fetch argument array([ 1., 1., 0., 0., 0., 1., 1., 0., 1., 0., 0., 1., 0.,
0., 0., 1., 1., 0., 0., 1., 0., 0., 0., 1., 0., 0.,
0., 1., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 1.,
0., 0., 1., 1., 1., 0., 1., 1., 0., 1., 1., 1., 1.,
1., 0., 0., 0., 0., 1., 0., 1., 1., 0., 0., 1., 0.,
0., 0., 0., 0., 0., 0., 1., 1., 1., 1., 1., 1., 0.,
0., 0., 0., 1., 0., 0., 0., 0., 0., 1., 0., 0., 0.,
0., 0., 1., 1., 0., 0., 0., 1., 1., 1., 0., 0., 0.,
0., 0., 0., 1., 1., 0., 0., 0., 1., 0., 0., 0., 0.,
0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
1., 0., 0., 1., 0., 0., 1., 0., 1., 1., 0., 1., 0.,
0., 0., 0., 0., 0., 1., 1., 0., 0., 1., 0., 0., 0.,
1., 1., 1., 1., 0., 1., 1., 0., 0., 1., 0., 0., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 0., 0., 0., 1., 0.,
0., 1., 0., 0., 1., 0., 0., 1., 1., 0., 0., 1., 0.,
0., 0., 1., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 1., 0., 0., 1., 0., 1., 1., 0., 1., 0., 1., 0.,
0., 0., 0., 1., 0., 0., 0., 1., 0., 1., 0., 0., 1.,
1., 0., 0., 1., 0., 1., 0., 0., 0.], dtype=float32) has invalid type <type 'numpy.ndarray'>, must be a string or Tensor. (Can not convert a ndarray into a Tensor or Operation.)
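One quick way to narrow this down (a check of my own, not code from the repo) is to print the type of each fetch just before the sess.run call; everything passed as a fetch has to still be a tf.Tensor or tf.Operation:

    # Illustrative check only: each fetch must still be a graph element,
    # not a NumPy array left over from an earlier sess.run call.
    for name, fetch in [("predictions", predictions), ("accuracy", accuracy), ("sim", sim)]:
        print("{} is a {}".format(name, type(fetch)))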
What I actually want to do is query similarity: compare a query vector against all of the document vectors in my corpus and rank the sentences by similarity score. I know that at the moment the LSTM just compares two sentences to each other and outputs the similarity as 0 or 1. How can I do that?
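As a rough sketch of that ranking idea (my own illustrative code, not from the repo: it assumes sim is the model's per-pair score tensor, that computing it does not need input_y, and that the query and the documents are already converted to the padded id sequences the network expects):

    import numpy as np

    def rank_corpus(sess, sim, input_x1, input_x2, dropout_keep_prob, query_ids, doc_ids):
        # Pair the single query with every document in the corpus.
        x1 = np.tile(query_ids, (len(doc_ids), 1))
        feed = {input_x1: x1,
                input_x2: doc_ids,
                dropout_keep_prob: 1.0}
        scores = sess.run(sim, feed_dict=feed)  # one score per document
        # If sim is a distance, smaller means more similar; reverse the
        # order if higher scores mean more similar in your setup.
        order = np.argsort(scores)
        return order, scores

The ranked documents are then doc_ids[order] (or the corresponding raw sentences).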
【Comments】:
What are the definitions of predictions, accuracy and sim? At least one of them is a numpy array, not a Tensor/Operation. Perhaps you accidentally redefined one of them while loading the data?
Yes, I was redefining sim, and that was causing the problem. It is solved now.
【Answer 1】:
The problem is that you are replacing the value of sim, which (I assume) initially holds a reference to a TensorFlow tensor or operation, with the result of evaluating it (a NumPy array), so the second iteration fails because sim is no longer a TensorFlow tensor or operation.
You could try something like this:
for db in batches:
    x1_dev_b, x2_dev_b, y_dev_b = zip(*db)
    #x1_dev_b = tf.convert_to_tensor(x1_dev_b,)
    print("type x1_dev_b {}".format(type(x1_dev_b)))    # tuple
    print("type x2_dev_b {}".format(type(x2_dev_b)))    # tuple
    print("type y_dev_b {}\n".format(type(y_dev_b)))    # tuple

    feed = {input_x1: x1_dev_b,
            input_x2: x2_dev_b,
            input_y: y_dev_b,
            dropout_keep_prob: 1.0}

    batch_predictions, batch_acc, batch_sim = sess.run([predictions, accuracy, sim], feed_dict=feed)
    print("type batch_predictions {}".format(type(batch_predictions)))  # numpy.ndarray
    print("type batch_acc {}".format(type(batch_acc)))                  # numpy.float32
    print("type batch_sim {}".format(type(batch_sim)))                  # numpy.ndarray

    all_predictions = np.concatenate([all_predictions, batch_predictions])
    print("\n printing batch predictions {}\n".format(batch_predictions))
    all_d = np.concatenate([all_d, batch_sim])
    print("DEV acc {}\n".format(batch_acc))