Error in multi-class text classification with a pre-trained BERT model
I'm trying to use Google's pre-trained BERT model to classify text into 34 mutually exclusive classes. After preparing the train, dev, and test TSV files that BERT expects as input, I tried to execute the following command in my Colab (Jupyter) notebook:
!python bert/run_classifier.py \
  --task_name=cola \
  --do_train=true \
  --do_eval=true \
  --data_dir=./Bert_Input_Folder \
  --vocab_file=./uncased_L-24_H-1024_A-16/vocab.txt \
  --bert_config_file=./uncased_L-24_H-1024_A-16/bert_config.json \
  --init_checkpoint=./uncased_L-24_H-1024_A-16/bert_model.ckpt \
  --max_seq_length=512 \
  --train_batch_size=32 \
  --learning_rate=2e-5 \
  --num_train_epochs=3.0 \
  --output_dir=./Bert_Output_Folder
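For reference, --task_name=cola makes run_classifier.py use its ColaProcessor, which reads headerless tab-separated files and, for train.tsv and dev.tsv, takes the label from column 1 and the sentence from column 3. A minimal sketch of writing a compatible train.tsv into the data_dir above; the example rows are hypothetical placeholders:

import csv

# Hypothetical (label, sentence) pairs for the 34-class task.
rows = [
    ("0", "First example sentence."),
    ("33", "Another example sentence."),
]

# ColaProcessor reads train.tsv/dev.tsv with no header row and uses
# column 1 as the label and column 3 as the sentence; columns 0 and 2
# only need filler values.
with open("Bert_Input_Folder/train.tsv", "w", newline="") as f:
    writer = csv.writer(f, delimiter="\t")
    for label, sentence in rows:
        writer.writerow(["dummy", label, "dummy", sentence])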
Running it, I get the following error:
WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.
WARNING:tensorflow:Estimator's model_fn (<function model_fn_builder.<locals>.model_fn at 0x7f4b945a01e0>) includes params argument, but params are not passed to Estimator.
INFO:tensorflow:Using config: {'_model_dir': './Bert_Output_Folder', '_tf_random_seed': None, '_save_summary_steps': 100, '_save_checkpoints_steps': 1000, '_save_checkpoints_secs': None, '_session_config': allow_soft_placement: true
graph_options {
  rewrite_options {
    meta_optimizer_iterations: ONE
  }
}
, '_keep_checkpoint_max': 5, '_keep_checkpoint_every_n_hours': 10000, '_log_step_count_steps': None, '_train_distribute': None, '_device_fn': None, '_protocol': None, '_eval_distribute': None, '_experimental_distribute': None, '_service': None, '_cluster_spec': <tensorflow.python.training.server_lib.ClusterSpec object at 0x7f4b94f366a0>, '_task_type': 'worker', '_task_id': 0, '_global_id_in_cluster': 0, '_master': '', '_evaluation_master': '', '_is_chief': True, '_num_ps_replicas': 0, '_num_worker_replicas': 1, '_tpu_config': TPUConfig(iterations_per_loop=1000, num_shards=8, num_cores_per_replica=None, per_host_input_for_training=3, tpu_job_name=None, initial_infeed_sleep_secs=None, input_partition_dims=None), '_cluster': None}
INFO:tensorflow:_TPUContext: eval_on_tpu True
WARNING:tensorflow:eval_on_tpu ignored because use_tpu is False.
INFO:tensorflow:Writing example 0 of 23834
Traceback (most recent call last):
  File "bert/run_classifier.py", line 981, in <module>
    tf.app.run()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "bert/run_classifier.py", line 870, in main
    train_examples, label_list, FLAGS.max_seq_length, tokenizer, train_file)
  File "bert/run_classifier.py", line 490, in file_based_convert_examples_to_features
    max_seq_length, tokenizer)
  File "bert/run_classifier.py", line 459, in convert_single_example
    label_id = label_map[example.label]
KeyError: '33'
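The failing lookup is in convert_single_example() (run_classifier.py line 459 above): it builds a label_map dict from whatever the processor's get_labels() returns and then indexes it with each example's label string, so KeyError: '33' means the string '33' was read from the input TSV but never returned by get_labels(). A standalone sketch of that mechanism (not the script itself):

# Minimal reproduction of the lookup that fails inside
# convert_single_example() in run_classifier.py.
label_list = [str(x) for x in range(33)]  # suppose '33' is missing

label_map = {}
for i, label in enumerate(label_list):
    label_map[label] = i

example_label = "33"                 # label string as read from train.tsv
label_id = label_map[example_label]  # raises KeyError: '33'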
In the run_classifier.py script, I modified the get_labels() function, which was originally written for a binary classification task, so that it returns all 34 of my classes:
def get_labels(self):
  """See base class."""
  return ["0", "1", "2", ..., "33"]
Any idea what's wrong, or am I missing other necessary modifications?
Thanks!
Answer
Simply replacing ['0', '1', '2', ..., '33'] with [str(x) for x in range(34)] in the get_labels function solved it (the two should be equivalent, but for some unknown reason this fixed the problem).
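For completeness, a minimal sketch of the corrected method, assuming the rest of the modified processor class stays unchanged:

def get_labels(self):
  """Return all 34 class labels as strings, '0' through '33'."""
  return [str(x) for x in range(34)]

A plausible explanation for the "unknown reason" is that a hand-typed 34-element list is easy to get subtly wrong (a skipped, duplicated, or mistyped entry, or a literal ... left in the code), while the comprehension is guaranteed to produce every label exactly once.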