VGG16 on the LFW dataset doesn't get RGB

Posted: 2020-10-08 17:37:05

Problem description:

I want to use VGG16 from the Keras library for face recognition on the LFW dataset, but what I get is grayscale instead of RGB, and I can't change it:

Comments on the question:

Hi @MahboobehNajafi, a screenshot alone usually isn't much help, because we can't reproduce the error you are getting. Could you post the code you used and, if possible, a reproducible example, so we can see where the error comes from and troubleshoot it?

Answer 1:
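For reference, these are the imports the code below appears to assume (a sketch; the exact module paths depend on the Keras/TensorFlow version in use):

    import numpy as np
    import cv2
    from sklearn.datasets import fetch_lfw_people
    from sklearn.model_selection import train_test_split
    from keras.models import Sequential
    from keras.layers import (ZeroPadding2D, Convolution2D, BatchNormalization,
                              MaxPooling2D, Flatten, Dense, Dropout)
    from keras.utils import to_categorical
    from keras import regularizers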
    # load the LFW dataset
    lfw_people = fetch_lfw_people(min_faces_per_person=53, resize=0.4)
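Note on the likely cause (based on the scikit-learn API): fetch_lfw_people returns single-channel grey-level images by default; passing color=True makes it return RGB arrays of shape (h, w, 3). A minimal sketch (lfw_people_rgb is an illustrative name):

    # color=True loads the faces as RGB instead of grey level
    lfw_people_rgb = fetch_lfw_people(min_faces_per_person=53, resize=0.4, color=True)
    print(lfw_people_rgb.images.shape)   # (n_samples, h, w, 3)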


 

# introspect the images arrays to find the shapes (for plotting)
n_samples, h, w = lfw_people.images.shape

# for machine learning we use the flattened data directly (relative pixel
# position info is ignored by this model)
# access the images

X = lfw_people.data
n_features = X.shape[1]


# the label to predict is the id of the person
# access the class labels
y = lfw_people.target
target_names = lfw_people.target_names
n_classes = target_names.shape[0]



print("Total dataset size:")
print("number of samples: " , n_samples)
print("number of classes: " , n_classes)
print("image dimensions: ", h, w)
print("number of features per image: ", h*w)


lfw_people.target_names

    array(['Ariel Sharon', 'Colin Powell', 'Donald Rumsfeld', 'George W Bush',
           'Gerhard Schroeder', 'Hugo Chavez', 'Jean Chretien',
           'John Ashcroft', 'Junichiro Koizumi', 'Tony Blair'], dtype='

    labelNames = ["Ariel Sharon", "Colin Powell", "Donald Rumsfeld", "George W Bush",
                  "Gerhard Schroeder", "Hugo Chavez", "Jean Chretien",
                  "John Ashcroft", "Junichiro Koizumi", "Tony Blair"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.4)
    print(X_train.shape)

    (873, 1850)
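A quick sanity check on that shape (a hedged aside, using the h and w printed above): 1850 = 50 × 37, i.e. each sample is one flattened grey-level channel; an RGB load would flatten to three times as many features.

    print(h, w, h * w)    # expected: 50 37 1850  -> one flattened grey channel
    print(h * w * 3)      # 5550 is the feature count an RGB (color=True) load would give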

    y_train = to_categorical(y_train, num_classes=10)
    y_test = to_categorical(y_test, num_classes=10)
    chanDim = 1

Resize the images to the 224 × 224 VGG input size:

    img = []
    for i in range(len(X_train)):
        z = cv2.resize(X_train[i], (224, 224)).astype(np.float32)
        z = np.expand_dims(z, axis=0)   # add a channel axis -> (1, 224, 224)
        img.append(z)
    Xnew_train = np.array(img)

    img = []
    for i in range(len(X_test)):
        r = cv2.resize(X_test[i], (224, 224)).astype(np.float32)   # note: X_test, not X_train
        r = np.expand_dims(r, axis=0)
        img.append(r)
    Xnew_test = np.array(img)
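A hedged aside: X_train[i] here is a flat 1850-element vector, so resizing it directly loses the 2-D face layout. Reshaping back to (h, w) first, or resizing lfw_people.images instead of .data, keeps the geometry. A minimal sketch:

    img = []
    for i in range(len(X_train)):
        face = X_train[i].reshape(h, w)                       # restore the (50, 37) layout
        z = cv2.resize(face, (224, 224)).astype(np.float32)   # upsample to the VGG input size
        img.append(z[np.newaxis, :, :])                       # (1, 224, 224), channels-first
    Xnew_train = np.array(img)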
     

Here I get 1 instead of 3, even though LFW is RGB:

    Xnew_train.shape

    (873, 1, 224, 224)
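If re-fetching the data with color=True is not an option, one hedged workaround is to replicate the single grey channel three times so the arrays match the (3, 224, 224) input the model declares below (a sketch; Xnew_train_rgb and Xnew_test_rgb are illustrative names):

    Xnew_train_rgb = np.repeat(Xnew_train, 3, axis=1)   # (873, 1, 224, 224) -> (873, 3, 224, 224)
    Xnew_test_rgb = np.repeat(Xnew_test, 3, axis=1)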

model = Sequential()
model.add(ZeroPadding2D((1,1),input_shape=(3,224,224)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(64, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th" , strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(128, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th" , strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th" , strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(BatchNormalization(axis=chanDim))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th" , strides=(2,2)))

model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(ZeroPadding2D((1,1)))
model.add(Convolution2D(256, 3, 3, activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2), dim_ordering="th" , strides=(2,2)))

model.add(Flatten())
model.add(Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(512, activation='relu', kernel_regularizer=regularizers.l2(0.01)))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
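A hedged side note on the API: Convolution2D(64, 3, 3) and dim_ordering="th" are the old Keras 1 spellings. On Keras 2 the equivalent layer calls would look like the sketch below (same layers, just the newer argument names), with data_format="channels_first" matching the (3, 224, 224) input:

    from keras.layers import Conv2D

    model.add(ZeroPadding2D((1, 1), input_shape=(3, 224, 224), data_format="channels_first"))
    model.add(Conv2D(64, (3, 3), activation='relu', data_format="channels_first"))
    model.add(BatchNormalization(axis=chanDim))
    model.add(MaxPooling2D(pool_size=(2, 2), strides=(2, 2), data_format="channels_first"))
    # ...and so on for the remaining blocks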


model.summary()

# compile the model with the Adam optimizer
model.compile(loss='categorical_crossentropy',
              optimizer="Adam",
              metrics=['accuracy'])

This is where the error occurs; if I use 1 instead of 3 in input_shape=(3,224,224), the model runs but the test accuracy is very low.

# NUM_EPOCHS = 25
BS = 32
batch_size = 128
history = model.fit(Xnew_train, y_train, batch_size=batch_size, epochs=200,
                    validation_split=0.2, verbose=1)

The error is:

ValueError: Error when checking input: expected zero_padding2d_1_input to have shape (3, 224, 224) but got array with shape (1, 224, 224)
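The mismatch is exactly between the declared input_shape=(3, 224, 224) and the single-channel arrays built above. With either fix (re-fetching with color=True, or replicating the grey channel as sketched earlier), the shapes line up; a hedged sketch reusing the illustrative Xnew_train_rgb name:

    history = model.fit(Xnew_train_rgb, y_train, batch_size=batch_size, epochs=200,
                        validation_split=0.2, verbose=1)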

Comments:
