opencv FisherFaceRecognizer's train() function shows TypeError: src is not a numpy array, neither a scalar

Posted: 2018-01-23 08:37:17

【Question】:

I am trying to modify the following code by training OpenCV's Fisher face classifier on a specific set of face images, and I do not know why the code below shows:

Traceback (most recent call last):
  File "create_model.py", line 109, in <module>
    update(emotions)
  File "create_model.py", line 104, in update
    run_recognizer(emotions)
  File "create_model.py", line 101, in run_recognizer
    fishface.train(np.array(training_data), npar_trainlabs)
TypeError: src is not a numpy array, neither a scalar

training_data contains dlib's vectorized landmarks, which I convert into a numpy array; training_labels is simply the labels 1 or 2.

The functions involved in the traceback are as follows:

import cv2
import dlib
import glob
import math
import numpy as np

fishface = cv2.face.createFisherFaceRecognizer()
emotions = ["True", "Glasses"]

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def get_landmarks(image):
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cimage = clahe.apply(gray)
    detections = detector(cimage, 1)
    landmarks_vectorised = []
    for k, d in enumerate(detections):  # For all detected face instances individually

        shape = predictor(cimage, d)  # Draw Facial Landmarks with the predictor class
        xlist = []
        ylist = []
        for i in range(1, 68):  # Store X and Y coordinates in two lists
            xlist.append(float(shape.part(i).x))
            ylist.append(float(shape.part(i).y))

        xmean = np.mean(xlist)  # Get the mean of both axes to determine centre of gravity
        ymean = np.mean(ylist)
        xcentral = [(x - xmean) for x in xlist]  # get distance between each point and the central point in both axes
        ycentral = [(y - ymean) for y in ylist]

        if xlist[26] == xlist[29]:  # If x-coordinates of the set are the same, the angle is 0; catch to prevent 'divide by 0' error
            anglenose = 0
        else:
            anglenose = int(math.atan((ylist[26] - ylist[29]) / (xlist[26] - xlist[29])) * 180 / math.pi)

        if anglenose < 0:
            anglenose += 90
        else:
            anglenose -= 90

        landmarks_vectorised = []

        if len(detections) < 1:
            landmarks_vectorised = "error"

        for x, y, w, z in zip(xcentral, ycentral, xlist, ylist):
            landmarks_vectorised.append(x)
            landmarks_vectorised.append(y)
            meannp = np.asarray((ymean, xmean))
            coornp = np.asarray((z, w))
            dist = np.linalg.norm(coornp - meannp)
            anglerelative = (math.atan((z - ymean) / (w - xmean)) * 180 / math.pi) - anglenose
            landmarks_vectorised.append(dist)
            landmarks_vectorised.append(anglerelative)
    return landmarks_vectorised


def make_sets(labels):
    training_data = []
    training_labels = []
    for label in labels:
        training = glob.glob("data\\%s\\*" % label)
        print(len(training))

        for item in training:
            try:
                image = cv2.imread(item)
            except:
                continue
            print(item)

            landmarks_vectorised = get_landmarks(image)

            if landmarks_vectorised == "error":
                print("error with landmarks")
                pass
            else:
                training_data.append(landmarks_vectorised)
                if str(label) == "True":
                    training_labels.append(2)
                elif str(label) == "Glasses":
                    training_labels.append(1)

    print("sets created")
    return training_data, training_labels


def run_recognizer(emotions):
    training_data, training_labels = make_sets(emotions)
    print("training fisher face classifier")
    print(type(training_data))
    print(type(training_labels))

    npar_train = np.array(training_data)
    npar_trainlabs = np.array(training_labels)
    fishface.train(np.array(training_data), npar_trainlabs)

def update(emotions):
    run_recognizer(emotions)
    fishface.save("glasses.xml")

update(emotions)

Please help me understand what this error means.

【Comments】:

Hi! You need to add more information/context to this question. First of all, we need to know which line throws the TypeError -- you need to copy the whole traceback you get when the error is raised. You should also show how training_data and training_labels are defined. It is most likely one of the first two lines of that block of 3 that fails, rather than the call to fishface.train().

@JRichardSnape Thanks for the reply! I have made the necessary changes. Could you give me some feedback on them?

【Answer 1】:

Try printing your training_data and its dtype; maybe you can put the samples in a list first and then convert that list to an np.array. Do the same for the labels.
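
A minimal sketch of that suggestion (assuming training_data and training_labels are the plain Python lists returned by make_sets(); check_training_data is a hypothetical helper name, not part of the question's code):

import numpy as np

def check_training_data(training_data, training_labels):
    # Every sample should be a flat list of numbers with the same length.
    # If the lengths differ (for example because get_landmarks() detected
    # no face and returned an empty list), numpy cannot build a proper
    # 2-D numeric array and OpenCV's train() will reject the input.
    lengths = {len(sample) for sample in training_data}
    print("feature-vector lengths found:", lengths)

    if len(lengths) != 1:
        raise ValueError("samples have different lengths; filter them out first")

    npar_train = np.array(training_data, dtype=np.float32)     # shape (n_samples, n_features)
    npar_trainlabs = np.array(training_labels, dtype=np.int32)
    print(npar_train.shape, npar_train.dtype)
    print(npar_trainlabs.shape, npar_trainlabs.dtype)
    return npar_train, npar_trainlabs

If lengths prints more than one value, the np.array(training_data) call in run_recognizer() cannot form a numeric 2-D array, which is one likely way to end up with the error shown in the question.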

【Discussion】:

Hi - this is a comment rather than an answer. It is good debugging advice, but it does not answer the question (mainly because you can't yet). I tend to agree that it is probably training_data, or the labels not being an array type.

@Richard, thanks for the reminder. I agree with you that training_data is probably where the problem is.

@JRichardSnape, Chenqi, thank you, but as you can see from the revised post above, they are both of type list.

Hi Maggie. Could you show me the details of the functions make_sets() and get_landmarks(image)? I need to know the type of the items inside training_data, not the type of training_data itself.

@Chenqi Sorry, I misunderstood you! I have added those functions.
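
One thing worth noting in the functions shown above: the len(detections) < 1 check sits inside the detection loop, so when no face is found get_landmarks() simply returns the empty list it started with rather than "error", and make_sets() then appends that empty list to training_data. A small guard sketch around the question's function (get_landmarks_safe is a hypothetical name) that keeps such samples out:

def get_landmarks_safe(image):
    # Wrap the question's get_landmarks(): treat an empty result (no face
    # detected) the same way as the explicit "error" case, so make_sets()
    # never stores an empty feature vector in training_data.
    landmarks = get_landmarks(image)
    if landmarks == "error" or len(landmarks) == 0:
        return "error"
    return landmarks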
