Error when doing text classification in Keras
I am doing text classification in Keras. First, I use Word2Vec to create an embedding matrix and pass it to a Keras Embedding layer, then I run a Conv1D on top of it. Here is the dataset I am using, and here is my code:
from gensim.models import Word2Vec
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
import numpy as np
from keras.models import Sequential
from keras.layers import Embedding,Flatten,Dense,Conv1D,MaxPooling1D,GlobalMaxPooling1D
from sklearn.preprocessing import LabelEncoder
from keras.utils import np_utils
import pandas as pd
from nltk.tokenize import word_tokenize
# def dataframe_to_list_of_words(df_name, col):
#     df = pd.read_csv(df_name)
#     lst = df[col].drop_duplicates().values.tolist()
#     tokenized_sents = [word_tokenize(i) for i in lst]
#     tokenized_sents_mod = [word for sublist in tokenized_sents for word in sublist]
#     return tokenized_sents_mod
# def convert_data_to_index(string_data, wv):
#     index_data = []
#     for word in string_data:
#         if word in wv:
#             index_data.append(wv.vocab[word].index)
#     return index_data
df=pd.read_csv('emotion_merged_dataset.csv')
texts=df['text']
labels=df['sentiment']
df_tokenized=df.apply(lambda row: word_tokenize(row['text']), axis=1)
model = Word2Vec(df_tokenized, min_count=1,size=300)
##############
embedding_matrix = np.zeros((len(model.wv.vocab), 300))
for i in range(len(model.wv.vocab)):
    # print(model.wv.index2word[i])
    embedding_vector = model.wv[model.wv.index2word[i]]
    if embedding_vector is not None:
        embedding_matrix[i] = embedding_vector
################
labels=df['sentiment']
encoder = LabelEncoder()
encoder.fit(labels)
encoded_Y = encoder.transform(labels)
labels_encoded= np_utils.to_categorical(encoded_Y)
#########################
maxlen=30
tokenizer = Tokenizer(3000)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
data = pad_sequences(sequences, maxlen=37)
############################
embeddings = Embedding(input_dim=embedding_matrix.shape[0], output_dim=embedding_matrix.shape[1],
                       weights=[embedding_matrix], trainable=False)
model=Sequential()
model.add(embeddings)
model.add(Conv1D(32,7,activation='relu'))
model.add(MaxPooling1D(5))
model.add(Conv1D(32,7,activation='relu'))
model.add(GlobalMaxPooling1D())
model.add(Dense(labels_encoded.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(data, labels, validation_split=0.2, epochs=10, batch_size=100)
When I run the code, I get the following error:
Error when checking target: expected dense_1 to have shape (None, 8) but got array with shape (19283, 1)
Can someone help me?
Answer
You have one-hot encoded your labels into categorical form, but you never actually use the result. Change:
model.fit(data, labels, validation_split=0.2, epochs=10, batch_size=100)
to:
model.fit(data, labels_encoded, validation_split=0.2, epochs=10, batch_size=100)
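To see why this resolves the shape mismatch, here is a minimal sanity check (a sketch reusing the variable names from the code above): `labels` is the raw pandas Series of sentiment strings, which Keras treats as a target of shape (19283, 1), while `labels_encoded` is the one-hot matrix whose second dimension matches the 8 units of the final softmax Dense layer.
# Sanity check (a sketch, assuming `labels` and `labels_encoded` from the code above).
print(labels.shape)          # (19283,)   -> reported as (19283, 1) in the error
print(labels_encoded.shape)  # (19283, 8) -> matches the expected (None, 8) of dense_1
With `labels_encoded` as the target, `categorical_crossentropy` can be computed against the softmax output and training proceeds normally.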