How to solve error (-215:Assertion failed) !ssize.empty() in function 'resize'?

Posted: 2021-12-19 08:30:31

Question:

I am working on a deep learning project (surface defect detection) to detect crack and patch defects in images (original size (2160, 3840, 3)), with two labels (cracks and patches). After training my model, it raises the following error when I test it on new data. For reference, my code is given below.

error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize' 

import cv2
import random
import numpy as np
import matplotlib.pyplot as plt
from glob import glob

# target resize dimensions (inferred from the (70, 150, 150, 3) shape printed further below)
nrows, ncolumns = 150, 150

train_dir = '/content/drive/MyDrive/usetrain/train'
test_dir = '/content/drive/MyDrive/usetrain/test'

train_cracks = glob('/content/drive/MyDrive/usetrain/train/cracks/*')

train_cracks[:10]

['/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C29-PILACAVA-CLX01_5.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04701-L473-03-2021-C06-PILACAVA-CLX02_8.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C45-PILACAVA-CLX01_5.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04704-L488-03-2021-C06-PILACAVA-CLX02_4.JPG',
 '/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C40-PILACAVA-CLX01_6.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04701-L473-03-2021-C09-PILACAVA-CLX01_2.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C18-PILACAVA-CLX01_1.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C11-PILACAVA-CLX01_1.jpg',
 '/content/drive/MyDrive/usetrain/train/cracks/F04704-L488-03-2021-C01-PILACAVA-CLX01_6.JPG',
 '/content/drive/MyDrive/usetrain/train/cracks/F04486-L462-03-2021-C24-PILACAVA-CLX01_3.jpg']

img1 = cv2.imread('/content/drive/MyDrive/usetrain/test/cracks/F04478-L397-04-2021-C05-PILACAVA-CLX02_8.jpg')
dimensions = img1.shape
print(dimensions)
(2160, 3840, 3)

train_patches = glob('/content/drive/MyDrive/usetrain/train/patches/*')

test_imgs = glob('/content/drive/MyDrive/usetrain/test/*')

train_imgs = train_cracks[:35] + train_patches[:35]  # slice the dataset and use 35 images from each class
random.shuffle(train_imgs)  # shuffle it randomly

 
Function for labels and images

# A function to read and process the images into an acceptable format for our model
def read_and_process_image(list_of_images):
    '''
    Returns two arrays:
        X is an array of resized images
        y is an array of labels
    '''
    X = []  # images
    y = []  # labels

    for image in list_of_images:
        # read the image and resize it to (nrows, ncolumns)
        X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows, ncolumns), interpolation=cv2.INTER_CUBIC))
        # get the labels
        if 'cracks' in image:
            y.append(1)
        elif 'patches' in image:
            y.append(0)
    return X, y

 X, y = read_and_process_image(train_imgs)

 X[0]
 array([[[182, 200, 207],
        [187, 199, 209],
        [189, 199, 208],
        ...,
        [106, 121, 136],
        [110, 124, 142],
        [126, 134, 147]],

       [[188, 201, 209],
        [188, 199, 207],
        [192, 203, 211],
        ...,
        [109, 124, 138],
        [110, 127, 136],
        [124, 136, 150]],

       [[188, 201, 209],
        [193, 205, 209],
        [187, 201, 210],
        ...,
        [107, 124, 136],
        [110, 126, 138],
        [117, 129, 139]],

       ...,

       [[196, 199, 203],
        [212, 215, 219],
        [216, 220, 224],
        ...,
        [130, 144, 156],
        [127, 144, 153],
        [134, 152, 163]],

       [[187, 190, 194],
        [210, 213, 217],
        [211, 215, 219],
        ...,
        [128, 144, 152],
        [131, 149, 158],
        [132, 150, 161]],

       [[180, 183, 187],
        [204, 209, 212],
        [210, 217, 220],
        ...,
        [123, 140, 148],
        [133, 152, 159],
        [135, 154, 162]]], dtype=uint8)

y
[1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 0,
 0, 1, 0, 0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1,
 0, 1, 0, 1, 1, 1, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0,
 1, 1, 0, 0, 0, 0, 1, 0, 1, 0]

plt.figure(figsize=(20, 10))
columns = 5
for i in range(columns):
    plt.subplot(5 // columns + 1, columns, i + 1)
    plt.imshow(X[i])

import seaborn as sns

# Convert the lists to numpy arrays
X = np.array(X)
y = np.array(y)

# Let's plot the labels to be sure we have just two classes
sns.countplot(x=y)

plt.title('Labels for Cracks and Patches')
 

 print( "Shape of train images is:", X. shape)
print ("Shape of labels is:", y. shape)
 Shape of train images is: (70, 150, 150, 3)
Shape of labels is: (70,)

# Let's split the data into train and validation sets
from sklearn.model_selection import train_test_split
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.20, random_state=2)

print("Shape of train images is:", X_train.shape)
print("Shape of validation images is:", X_val.shape)
print("Shape of labels is:", y_train.shape)
print("Shape of labels is:", y_val.shape)
 Shape of train images is: (56, 150, 150, 3)
Shape of validation images is: (14, 150, 150, 3)
Shape of labels is: (56,)
Shape of labels is: (14,)

# Get the length of the train and validation data
ntrain = len(X_train)
nval = len(X_val)


batch_size = 4

 #Keras Model

 from keras import layers
from keras import models
from keras import optimizers
from keras.preprocessing.image import ImageDataGenerator
from keras.preprocessing.image import img_to_array, load_img

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5)) #Dropout for regularization
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

 model.summary()
 Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d (Conv2D)              (None, 148, 148, 32)      896       
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 74, 74, 32)        0         
_________________________________________________________________
conv2d_1 (Conv2D)            (None, 72, 72, 64)        18496     
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 36, 36, 64)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 34, 34, 128)       73856     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 17, 17, 128)       0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 15, 15, 128)       147584    
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten (Flatten)            (None, 6272)              0         
_________________________________________________________________
dropout (Dropout)            (None, 6272)              0         
_________________________________________________________________
dense (Dense)                (None, 512)               3211776   
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513       
=================================================================
Total params: 3,453,121
Trainable params: 3,453,121
Non-trainable params: 0
_________________________________________________________________

 from tensorflow.keras import optimizers

# We'll use the RMSprop optimizer with a learning rate of 0.0001
# We'll use binary_crossentropy loss because it's a binary classification problem
model.compile(loss='binary_crossentropy', optimizer=optimizers.RMSprop(learning_rate=1e-4), metrics=['acc'])

# Let's create the augmentation configuration

train_datagen = ImageDataGenerator(rescale=1./255,
                                   rotation_range=40,
                                   width_shift_range=0.2,
                                   height_shift_range=0.2,
                                   shear_range=0.2,
                                   zoom_range=0.2,
                                   horizontal_flip=True)

val_datagen = ImageDataGenerator(rescale=1./255)

# Create the image generators
train_generator = train_datagen.flow(X_train, y_train, batch_size=batch_size)
val_generator = val_datagen.flow(X_val, y_val, batch_size=batch_size)

# The training part
history = model.fit(train_generator,
                    steps_per_epoch=ntrain // batch_size,
                    epochs=100,
                    validation_data=val_generator,
                    validation_steps=nval // batch_size)
 Epoch 1/100
14/14 [==============================] - 3s 52ms/step - loss: 0.7155 - acc: 0.4464 - val_loss: 0.7217 - val_acc: 0.4167
Epoch 2/100
14/14 [==============================] - 1s 35ms/step - loss: 0.7124 - acc: 0.4107 - val_loss: 0.6927 - val_acc: 0.5000
Epoch 3/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6994 - acc: 0.5000 - val_loss: 0.7094 - val_acc: 0.4167
Epoch 4/100
14/14 [==============================] - 1s 38ms/step - loss: 0.7052 - acc: 0.4821 - val_loss: 0.6899 - val_acc: 0.7500
Epoch 5/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6988 - acc: 0.4643 - val_loss: 0.6958 - val_acc: 0.4167
Epoch 6/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6984 - acc: 0.5000 - val_loss: 0.6925 - val_acc: 0.5833
Epoch 7/100
14/14 [==============================] - 1s 35ms/step - loss: 0.7043 - acc: 0.4286 - val_loss: 0.6923 - val_acc: 0.5833
Epoch 8/100
14/14 [==============================] - 1s 40ms/step - loss: 0.6980 - acc: 0.5000 - val_loss: 0.6939 - val_acc: 0.5000
Epoch 9/100
14/14 [==============================] - 1s 36ms/step - loss: 0.7056 - acc: 0.3750 - val_loss: 0.6931 - val_acc: 0.5000
Epoch 10/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6975 - acc: 0.5536 - val_loss: 0.6957 - val_acc: 0.4167
Epoch 11/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6974 - acc: 0.4286 - val_loss: 0.6956 - val_acc: 0.4167
Epoch 12/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6965 - acc: 0.5000 - val_loss: 0.6902 - val_acc: 0.5000
Epoch 13/100
14/14 [==============================] - 1s 40ms/step - loss: 0.6944 - acc: 0.5179 - val_loss: 0.6920 - val_acc: 0.5000
Epoch 14/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6997 - acc: 0.4821 - val_loss: 0.6920 - val_acc: 0.5000
Epoch 15/100
14/14 [==============================] - 1s 42ms/step - loss: 0.6962 - acc: 0.4464 - val_loss: 0.6938 - val_acc: 0.5000
Epoch 16/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6940 - acc: 0.5714 - val_loss: 0.6995 - val_acc: 0.4167
Epoch 17/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6906 - acc: 0.5179 - val_loss: 0.6956 - val_acc: 0.5000
Epoch 18/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6944 - acc: 0.5179 - val_loss: 0.6836 - val_acc: 0.5833
Epoch 19/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6989 - acc: 0.4821 - val_loss: 0.6953 - val_acc: 0.4167
Epoch 20/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6960 - acc: 0.4464 - val_loss: 0.6905 - val_acc: 0.6667
Epoch 21/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6952 - acc: 0.4821 - val_loss: 0.6903 - val_acc: 0.5000
Epoch 22/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6951 - acc: 0.5357 - val_loss: 0.6928 - val_acc: 0.5000
Epoch 23/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6979 - acc: 0.4107 - val_loss: 0.6907 - val_acc: 0.5833
Epoch 24/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6949 - acc: 0.5179 - val_loss: 0.6912 - val_acc: 0.5000
Epoch 25/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6932 - acc: 0.5357 - val_loss: 0.6882 - val_acc: 0.5833
Epoch 26/100
14/14 [==============================] - 1s 40ms/step - loss: 0.6989 - acc: 0.4286 - val_loss: 0.6910 - val_acc: 0.5000
Epoch 27/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6940 - acc: 0.5179 - val_loss: 0.6904 - val_acc: 0.4167
Epoch 28/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6881 - acc: 0.5000 - val_loss: 0.6898 - val_acc: 0.5000
Epoch 29/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6898 - acc: 0.5357 - val_loss: 0.6899 - val_acc: 0.5833
Epoch 30/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6878 - acc: 0.5714 - val_loss: 0.6921 - val_acc: 0.5833
Epoch 31/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6911 - acc: 0.6071 - val_loss: 0.6911 - val_acc: 0.5000
Epoch 32/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6831 - acc: 0.6071 - val_loss: 0.6873 - val_acc: 0.5000
Epoch 33/100
14/14 [==============================] - 1s 37ms/step - loss: 0.7060 - acc: 0.5536 - val_loss: 0.6945 - val_acc: 0.5833
Epoch 34/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6813 - acc: 0.5893 - val_loss: 0.7053 - val_acc: 0.5000
Epoch 35/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6911 - acc: 0.5536 - val_loss: 0.6920 - val_acc: 0.5000
Epoch 36/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6879 - acc: 0.5893 - val_loss: 0.6875 - val_acc: 0.5833
Epoch 37/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6726 - acc: 0.5714 - val_loss: 0.6945 - val_acc: 0.5000
Epoch 38/100
14/14 [==============================] - 1s 36ms/step - loss: 0.6677 - acc: 0.6250 - val_loss: 0.6694 - val_acc: 0.5000
Epoch 39/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6877 - acc: 0.5893 - val_loss: 0.6891 - val_acc: 0.5000
Epoch 40/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6633 - acc: 0.5714 - val_loss: 0.7008 - val_acc: 0.5000
Epoch 41/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6667 - acc: 0.5893 - val_loss: 0.6740 - val_acc: 0.6667
Epoch 42/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6540 - acc: 0.6607 - val_loss: 0.7073 - val_acc: 0.4167
Epoch 43/100
14/14 [==============================] - 1s 41ms/step - loss: 0.6649 - acc: 0.5893 - val_loss: 0.7120 - val_acc: 0.5000
Epoch 44/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6808 - acc: 0.5000 - val_loss: 0.6928 - val_acc: 0.5833
Epoch 45/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6486 - acc: 0.6250 - val_loss: 0.7054 - val_acc: 0.4167
Epoch 46/100
14/14 [==============================] - 1s 41ms/step - loss: 0.6472 - acc: 0.5714 - val_loss: 0.6984 - val_acc: 0.5833
Epoch 47/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6636 - acc: 0.5714 - val_loss: 0.8272 - val_acc: 0.5000
Epoch 48/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6599 - acc: 0.5179 - val_loss: 0.6857 - val_acc: 0.5833
Epoch 49/100
14/14 [==============================] - 1s 40ms/step - loss: 0.6593 - acc: 0.6786 - val_loss: 0.7024 - val_acc: 0.5833
Epoch 50/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6481 - acc: 0.6071 - val_loss: 0.6696 - val_acc: 0.5833
Epoch 51/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6189 - acc: 0.5893 - val_loss: 0.7829 - val_acc: 0.5000
Epoch 52/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6671 - acc: 0.5357 - val_loss: 0.6825 - val_acc: 0.3333
Epoch 53/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6594 - acc: 0.6071 - val_loss: 0.6937 - val_acc: 0.5000
Epoch 54/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6382 - acc: 0.5000 - val_loss: 0.6992 - val_acc: 0.5000
Epoch 55/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6456 - acc: 0.6071 - val_loss: 0.7051 - val_acc: 0.5000
Epoch 56/100
14/14 [==============================] - 1s 37ms/step - loss: 0.5973 - acc: 0.6071 - val_loss: 0.7803 - val_acc: 0.5000
Epoch 57/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6598 - acc: 0.6429 - val_loss: 0.6868 - val_acc: 0.4167
Epoch 58/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6090 - acc: 0.6786 - val_loss: 0.6781 - val_acc: 0.5833
Epoch 59/100
14/14 [==============================] - 1s 39ms/step - loss: 0.5979 - acc: 0.6964 - val_loss: 0.6984 - val_acc: 0.5833
Epoch 60/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6809 - acc: 0.5893 - val_loss: 0.6575 - val_acc: 0.5000
Epoch 61/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6201 - acc: 0.7321 - val_loss: 0.7751 - val_acc: 0.5000
Epoch 62/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6777 - acc: 0.6786 - val_loss: 0.6880 - val_acc: 0.4167
Epoch 63/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6538 - acc: 0.6607 - val_loss: 0.7105 - val_acc: 0.4167
Epoch 64/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6501 - acc: 0.6071 - val_loss: 0.6990 - val_acc: 0.5833
Epoch 65/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6747 - acc: 0.6250 - val_loss: 0.6909 - val_acc: 0.5000
Epoch 66/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6176 - acc: 0.6607 - val_loss: 0.7157 - val_acc: 0.5833
Epoch 67/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6256 - acc: 0.6607 - val_loss: 0.7766 - val_acc: 0.4167
Epoch 68/100
14/14 [==============================] - 1s 37ms/step - loss: 0.5733 - acc: 0.6786 - val_loss: 0.7187 - val_acc: 0.4167
Epoch 69/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6577 - acc: 0.6607 - val_loss: 0.6613 - val_acc: 0.5833
Epoch 70/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6062 - acc: 0.6429 - val_loss: 0.9178 - val_acc: 0.4167
Epoch 71/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6403 - acc: 0.6250 - val_loss: 0.8255 - val_acc: 0.5833
Epoch 72/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6436 - acc: 0.6429 - val_loss: 0.7849 - val_acc: 0.4167
Epoch 73/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5983 - acc: 0.6250 - val_loss: 0.8304 - val_acc: 0.5000
Epoch 74/100
14/14 [==============================] - 1s 41ms/step - loss: 0.6738 - acc: 0.5714 - val_loss: 0.7302 - val_acc: 0.4167
Epoch 75/100
14/14 [==============================] - 1s 36ms/step - loss: 0.5941 - acc: 0.6607 - val_loss: 0.7187 - val_acc: 0.5000
Epoch 76/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6028 - acc: 0.6786 - val_loss: 0.7375 - val_acc: 0.4167
Epoch 77/100
14/14 [==============================] - 1s 37ms/step - loss: 0.6261 - acc: 0.6429 - val_loss: 0.7407 - val_acc: 0.4167
Epoch 78/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5387 - acc: 0.6964 - val_loss: 0.7237 - val_acc: 0.5000
Epoch 79/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5672 - acc: 0.6607 - val_loss: 0.9385 - val_acc: 0.5000
Epoch 80/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6389 - acc: 0.6964 - val_loss: 0.8926 - val_acc: 0.5000
Epoch 81/100
14/14 [==============================] - 1s 41ms/step - loss: 0.5603 - acc: 0.7321 - val_loss: 0.9412 - val_acc: 0.5000
Epoch 82/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6297 - acc: 0.6964 - val_loss: 0.7572 - val_acc: 0.5833
Epoch 83/100
14/14 [==============================] - 1s 40ms/step - loss: 0.5175 - acc: 0.6964 - val_loss: 0.7978 - val_acc: 0.4167
Epoch 84/100
14/14 [==============================] - 1s 37ms/step - loss: 0.5738 - acc: 0.6964 - val_loss: 0.7796 - val_acc: 0.5833
Epoch 85/100
14/14 [==============================] - 1s 40ms/step - loss: 0.5711 - acc: 0.6429 - val_loss: 1.0373 - val_acc: 0.5000
Epoch 86/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6091 - acc: 0.6786 - val_loss: 0.7931 - val_acc: 0.4167
Epoch 87/100
14/14 [==============================] - 1s 42ms/step - loss: 0.6229 - acc: 0.6964 - val_loss: 0.7175 - val_acc: 0.5000
Epoch 88/100
14/14 [==============================] - 1s 40ms/step - loss: 0.5781 - acc: 0.6607 - val_loss: 0.7392 - val_acc: 0.6667
Epoch 89/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6144 - acc: 0.6250 - val_loss: 0.7039 - val_acc: 0.6667
Epoch 90/100
14/14 [==============================] - 1s 39ms/step - loss: 0.5126 - acc: 0.7143 - val_loss: 0.7843 - val_acc: 0.5000
Epoch 91/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6374 - acc: 0.6607 - val_loss: 0.7806 - val_acc: 0.5000
Epoch 92/100
14/14 [==============================] - 1s 37ms/step - loss: 0.7390 - acc: 0.5714 - val_loss: 0.7436 - val_acc: 0.5000
Epoch 93/100
14/14 [==============================] - 1s 39ms/step - loss: 0.6417 - acc: 0.6250 - val_loss: 0.7791 - val_acc: 0.4167
Epoch 94/100
14/14 [==============================] - 1s 38ms/step - loss: 0.6310 - acc: 0.6786 - val_loss: 0.6743 - val_acc: 0.5000
Epoch 95/100
14/14 [==============================] - 1s 37ms/step - loss: 0.5615 - acc: 0.6429 - val_loss: 0.8794 - val_acc: 0.5833
Epoch 96/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5540 - acc: 0.7321 - val_loss: 0.9387 - val_acc: 0.6667
Epoch 97/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5913 - acc: 0.7321 - val_loss: 0.8006 - val_acc: 0.4167
Epoch 98/100
14/14 [==============================] - 1s 38ms/step - loss: 0.5682 - acc: 0.7679 - val_loss: 0.7195 - val_acc: 0.4167
Epoch 99/100
14/14 [==============================] - 1s 41ms/step - loss: 0.6450 - acc: 0.7143 - val_loss: 0.7179 - val_acc: 0.5000
Epoch 100/100
14/14 [==============================] - 1s 42ms/step - loss: 0.5651 - acc: 0.6786 - val_loss: 0.7568 - val_acc: 0.4167

 #Now lets predict on the first 10 Images of the test set
X_test, y_test = read_and_process_image(test_imgs[:3])
x = np.array(X_test)
test_datagen = ImageDataGenerator (rescale=1./255)

error                                     Traceback (most recent call last)
<ipython-input-35-922ff9d9ecb5> in <module>()
      1 #Now lets predict on the first 10 Images of the test set
----> 2 X_test, y_test = read_and_process_image(test_imgs[:3])
      3 x = np.array(X_test)
      4 test_datagen = ImageDataGenerator (rescale=1./255)

<ipython-input-11-e3fe44f59019> in read_and_process_image(list_of_images)
     11 
     12     for image in list_of_images:
---> 13         X.append(cv2.resize(cv2.imread(image, cv2. IMREAD_COLOR), (nrows, ncolumns), interpolation=cv2. INTER_CUBIC)) #Read the image
     14         #get the labels
     15         if'cracks' in image:

error: OpenCV(4.1.2) /io/opencv/modules/imgproc/src/resize.cpp:3720: error: (-215:Assertion failed) !ssize.empty() in function 'resize'

   

Question comments:

Answer 1:

The path to your test images is broken. The error occurs because cv2 tries to resize an empty numpy array. You should check the path to your test dataset (make sure the image_paths are accessible).
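As a quick sanity check, here is a minimal sketch of that idea (safe_read_and_resize is a hypothetical helper, and the glob pattern assumes the test images sit in class subfolders such as test/cracks/, as the path read earlier in the question suggests): it verifies each path is a readable image file before resizing, so a bad path fails with a clear message instead of the !ssize.empty() assertion.

import os
import cv2
from glob import glob

# The question's glob('/content/drive/MyDrive/usetrain/test/*') likely matches the class
# subfolders themselves (e.g. test/cracks), not the image files inside them, and
# cv2.imread returns None for those entries.
test_imgs = glob('/content/drive/MyDrive/usetrain/test/*/*')

def safe_read_and_resize(path, size=(150, 150)):
    '''Read an image and resize it, failing loudly if the path is bad.'''
    if not os.path.isfile(path):
        raise FileNotFoundError('Not a file (check your glob pattern): ' + path)
    img = cv2.imread(path, cv2.IMREAD_COLOR)
    if img is None:  # cv2.imread returns None instead of raising on unreadable paths
        raise ValueError('cv2.imread could not decode: ' + path)
    return cv2.resize(img, size, interpolation=cv2.INTER_CUBIC)

X_test = [safe_read_and_resize(p) for p in test_imgs[:3]]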

Comments:

Thanks for your reply. I corrected my test image path and it works now. However, when I resize the images to (224, 224) to train my model, I get the following error with the same code given in the question: InvalidArgumentError: Input to reshape is a tensor with 73728 values, but the requested shape requires a multiple of 6272 [[node sequential/flatten/Reshape (defined at <ipython-input-28-c69365c15e16>:6) ]] [Op:__inference_train_function_1206] Function call stack: train_function

That's because the images you test on have a different size from the images you trained on. Make sure the test images have exactly the same resolution as the training images.

For training only, I tried resizing them from the original size (2160, 3840, 3) to (224, 224), and it gave me the 'Input to reshape' error mentioned in my first comment. But when I resize them to (150, 150, 3) it works fine without any errors. How can I fix this error?

You can't. You trained at resolution (150, 150, 3), so in that case you have to test at the same resolution.

What I mean is: if, during training only, I resize to (224, 224, 3) instead of (150, 150, 3), without touching the test images, then while fitting the model on the train images I get InvalidArgumentError: Input to reshape is a tensor with 73728 values, but the requested shape requires a multiple of 6272 [[node sequential/flatten/Reshape (defined at <ipython-input-28-c69365c15e16>:6) ]] [Op:__inference_train_function_1206] Function call stack: train_function
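A possible way out of the mismatch discussed above (a sketch based on the model code in the question, not something from the answer): the 73728-vs-6272 figures in the quoted error are consistent with a network whose Flatten layer was built for (150, 150, 3) inputs being fed 224x224 batches, so whenever the training resolution changes, the model has to be rebuilt with the matching input_shape and the test images resized to that same resolution.

from keras import layers, models

IMG_SIZE = (224, 224)  # pick one resolution and use it for training AND testing

def build_model(input_shape=IMG_SIZE + (3,)):
    # Same architecture as in the question, parameterised by input_shape so the
    # Flatten/Dense sizes follow whatever resolution is chosen above.
    model = models.Sequential()
    model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=input_shape))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(64, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Conv2D(128, (3, 3), activation='relu'))
    model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    model.add(layers.Dropout(0.5))
    model.add(layers.Dense(512, activation='relu'))
    model.add(layers.Dense(1, activation='sigmoid'))
    return model

model = build_model()  # rebuild (and retrain from scratch) whenever IMG_SIZE changes
# ...and resize both train and test images with the same size:
# cv2.resize(img, IMG_SIZE, interpolation=cv2.INTER_CUBIC)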

The above covers "How to solve error (-215:Assertion failed) !ssize.empty() in function 'resize'?". If it did not solve your problem, see the following related questions:

CV2 image error: error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'

Error when resizing an image: "error: (-215:Assertion failed) func != 0 in function 'resize'"

Error: OpenCV(4.1.0) error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize'

cv2.error: (-215:Assertion failed) !ssize.empty() in function 'resize' (solution)

OpenCV resize fails on large images with "error: (-215) ssize.area() > 0 in function cv::resize"

error: (-215:Assertion failed) !ssize.empty() in function 'cv::resize' (analysis and solution)