Python - Trouble with Dedupe: TypeError: unhashable type: 'numpy.ndarray'
Posted: 2015-01-16 20:41:08

【Question】: I can't get Dedupe to run. I'm trying to use this library to remove duplicates from a large list of addresses. Here is my code:
import collections
import csv
import logging
import optparse
import os
import re

from numpy import nan
import dedupe
from unidecode import unidecode

optp = optparse.OptionParser()
optp.add_option('-v', '--verbose', dest='verbose', action='count',
                help='Increase verbosity (specify multiple times for more)')
(opts, args) = optp.parse_args()

log_level = logging.WARNING
if opts.verbose == 1:
    log_level = logging.INFO
elif opts.verbose >= 2:
    log_level = logging.DEBUG
logging.getLogger().setLevel(log_level)

input_file = 'H:/My Documents/Python Scripts/Dedupe/DupeTester.csv'
output_file = 'csv_example_output.csv'
settings_file = 'csv_example_learned_settings'
training_file = 'csv_example_training.json'

def preProcess(column):
    # Normalize a field: strip accents, collapse whitespace, lowercase
    column = column.decode("utf8")
    column = unidecode(column)
    column = re.sub(' +', ' ', column)
    column = re.sub('\n', ' ', column)
    column = column.strip().strip('"').strip("'").lower().strip()
    return column

def readData(filename):
    # Read the CSV into a dict of records keyed by row id
    data_d = {}
    with open(filename) as f:
        reader = csv.DictReader(f)
        for row in reader:
            clean_row = [(k, preProcess(v)) for (k, v) in row.items()]
            row_id = int(row[''])
            data_d[row_id] = dict(clean_row)
    return data_d

print 'importing data ...'
data_d = readData(input_file)

if os.path.exists(settings_file):
    print 'reading from', settings_file
    with open(settings_file, 'rb') as f:
        deduper = dedupe.StaticDedupe(f)
else:
    fields = [
        {"field": "fulladdr", "type": "Address"},
        {"field": "zip", "type": "ShortString"},
    ]
    deduper = dedupe.Dedupe(fields)
    deduper.sample(data_d, 200)

    if os.path.exists(training_file):
        print 'reading labeled examples from ', training_file
        with open(training_file, 'rb') as f:
            deduper.readTraining(f)

    print 'starting active labeling...'
    dedupe.consoleLabel(deduper)
    deduper.train()

    with open(training_file, 'w') as tf:
        deduper.writeTraining(tf)
    with open(settings_file, 'w') as sf:
        deduper.writeSettings(sf)

print 'blocking...'
threshold = deduper.threshold(data_d, recall_weight=2)

print 'clustering...'
clustered_dupes = deduper.match(data_d, threshold)
print '# duplicate sets', len(clustered_dupes)

cluster_membership = {}
cluster_id = 0
for (cluster_id, cluster) in enumerate(clustered_dupes):
    id_set, scores = cluster
    cluster_d = [data_d[c] for c in id_set]
    canonical_rep = dedupe.canonicalize(cluster_d)
    for record_id, score in zip(id_set, scores):
        cluster_membership[record_id] = {
            "cluster id": cluster_id,
            "canonical representation": canonical_rep,
            "confidence": score,
        }

singleton_id = cluster_id + 1

with open(output_file, 'w') as f_output:
    writer = csv.writer(f_output)

    with open(input_file) as f_input:
        reader = csv.reader(f_input)

        heading_row = reader.next()
        heading_row.insert(0, 'confidence_score')
        heading_row.insert(0, 'Cluster ID')
        canonical_keys = canonical_rep.keys()
        for key in canonical_keys:
            heading_row.append('canonical_' + key)
        writer.writerow(heading_row)

        for row in reader:
            row_id = int(row[0])
            if row_id in cluster_membership:
                cluster_id = cluster_membership[row_id]["cluster id"]
                canonical_rep = cluster_membership[row_id]["canonical representation"]
                row.insert(0, cluster_membership[row_id]['confidence'])
                row.insert(0, cluster_id)
                for key in canonical_keys:
                    row.append(canonical_rep[key].encode('utf8'))
            else:
                row.insert(0, None)
                row.insert(0, singleton_id)
                singleton_id += 1
                for key in canonical_keys:
                    row.append(None)
            writer.writerow(row)
Specifically, when I run it, I get the following:
C:\Anaconda\lib\site-packages\dedupe\core.py:18: UserWarning: There may be duplicates in the sample
warnings.warn("There may be duplicates in the sample")
Traceback (most recent call last):
File "<ipython-input-1-33e46d604c5f>", line 1, in <module>
runfile('H:/My Documents/Python Scripts/Dedupe/dupetestscript.py', wdir='H:/My Documents/Python Scripts/Dedupe')
File "C:\Anaconda\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 580, in runfile
execfile(filename, namespace)
File "H:/My Documents/Python Scripts/Dedupe/dupetestscript.py", line 67, in <module>
deduper.sample(data_d, 200)
File "C:\Anaconda\lib\site-packages\dedupe\api.py", line 924, in sample
random_sample_size))
TypeError: unhashable type: 'numpy.ndarray'
【Comments】:

Are you sure your sample code is complete? The error message is about a numpy array, but your code never uses one — the only thing it imports from numpy is nan, which is unused.

Can't reproduce; the code hasn't been reduced to the actual problem.

@strpeter That's not surprising, since this was a bug in the library that has since been flagged and fixed. See here: github.com/datamade/dedupe/issues/336

【Answer 1】: A numpy array can be changed after it is created (it is "mutable"). Python speeds up dictionary access by using the hash of a key rather than the key itself.
Therefore, only hashable objects such as numbers, strings, or tuples can be used as dictionary keys. From the definition of hashable in the Python glossary:
An object is hashable if it has a hash value which never changes during its lifetime (it needs a __hash__() method), and can be compared to other objects (it needs an __eq__() method). Hashable objects which compare equal must have the same hash value. Hashability makes an object usable as a dictionary key and a set member, because these data structures use the hash value internally.

All of Python's immutable built-in objects are hashable, while no mutable containers (such as lists or dictionaries) are. Objects which are instances of user-defined classes are hashable by default; they all compare unequal (except with themselves), and their hash value is derived from their id().
【Comments】:
That's unfortunate. What if you're working in Pandas and want to create a dict? Pandas is all numpy under the hood...

You can build dictionary keys from the immutable data inside a Pandas dataframe or numpy array; you just can't use the array or dataframe itself as the key. You can use it as a value, though.
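For example, a hypothetical sketch along those lines (the column names mirror the question's fields; the values are made up):

import pandas as pd

df = pd.DataFrame({'fulladdr': ['123 Main St', '123 Main St.'],
                   'zip': ['60601', '60601']})

lookup = {}
for row in df.itertuples(index=False):
    # a plain tuple of scalar values is immutable, hence hashable
    lookup[tuple(row)] = 'cluster-0'

# By contrast, df.values[0] is an ndarray, and lookup[df.values[0]]
# would raise TypeError: unhashable type: 'numpy.ndarray'
print(lookup)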