Low accuracy of shape predictor with default dataset and training
【Posted】: 2017-04-14 07:24:17

【Question】:

I am trying to train a shape predictor with dlib, using the default dataset (/dlib-19.0/examples/faces/training_with_face_landmarks.xml) and the default training example (train_shape_predictor_ex.cpp).

So I want to train a shape predictor that comes out exactly like the default one (shape_predictor_68_face_landmarks.dat), since I use the same dataset and the same training code. But I run into some problems.
After training I get a .dat file of 16.6 MB (while the default dlib predictor shape_predictor_68_face_landmarks.dat is 99.7 MB).

When I test my .dat file (16.6 MB) I get low accuracy, but when I test the default .dat file (shape_predictor_68_face_landmarks.dat, 99.7 MB) I get high accuracy.
My shape predictor: [result image]
shape_predictor_68_face_landmarks.dat: [result image]
Training:
#include <QCoreApplication>
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;
using namespace std;

std::vector<std::vector<double> > get_interocular_distances (
    const std::vector<std::vector<full_object_detection> >& objects
);

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    try
    {
        const std::string faces_directory = "/home/user/Documents/dlib-19.0/examples/faces/";

        dlib::array<array2d<unsigned char> > images_train;
        std::vector<std::vector<full_object_detection> > faces_train;

        load_image_dataset(images_train, faces_train, faces_directory+"training_with_face_landmarks.xml");

        shape_predictor_trainer trainer;
        trainer.set_oversampling_amount(300);
        trainer.set_nu(0.05);
        trainer.set_tree_depth(2);
        trainer.be_verbose();

        shape_predictor sp = trainer.train(images_train, faces_train);

        cout << "mean training error: "
             << test_shape_predictor(sp, images_train, faces_train, get_interocular_distances(faces_train))
             << endl;

        serialize(faces_directory+"sp_default_settings.dat") << sp;
    }
    catch (exception& e)
    {
        cout << "\nexception thrown!" << endl;
        cout << e.what() << endl;
    }

    return a.exec();
}

double interocular_distance (
    const full_object_detection& det
)
{
    dlib::vector<double,2> l, r;
    double cnt = 0;
    // Find the center of the left eye by averaging the points around
    // the eye.
    for (unsigned long i = 36; i <= 41; ++i)
    {
        l += det.part(i);
        ++cnt;
    }
    l /= cnt;

    // Find the center of the right eye by averaging the points around
    // the eye.
    cnt = 0;
    for (unsigned long i = 42; i <= 47; ++i)
    {
        r += det.part(i);
        ++cnt;
    }
    r /= cnt;

    // Now return the distance between the centers of the eyes
    return length(l-r);
}

std::vector<std::vector<double> > get_interocular_distances (
    const std::vector<std::vector<full_object_detection> >& objects
)
{
    std::vector<std::vector<double> > temp(objects.size());
    for (unsigned long i = 0; i < objects.size(); ++i)
    {
        for (unsigned long j = 0; j < objects[i].size(); ++j)
        {
            temp[i].push_back(interocular_distance(objects[i][j]));
        }
    }
    return temp;
}
Testing:
#include <QCoreApplication>
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing/render_face_detections.h>
#include <dlib/image_processing.h>
#include <dlib/gui_widgets.h>
#include <dlib/image_io.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;
using namespace std;

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    try
    {
        // We need a face detector. We will use this to get bounding boxes for
        // each face in an image.
        frontal_face_detector detector = get_frontal_face_detector();

        // And we also need a shape_predictor. This is the tool that will predict face
        // landmark positions given an image and face bounding box. Here we load the
        // model trained above (sp_default_settings.dat) from a hard-coded path.
        shape_predictor sp;
        deserialize("/home/user/Downloads/muct-master/samples/sp_default_settings.dat") >> sp;

        string srcDir = "/home/user/Downloads/muct-master/samples/selection/";
        string dstDir = "/home/user/Downloads/muct-master/samples/my_results_default/";

        std::vector<string> vecOfImg;
        vecOfImg.push_back("i001qa-mn.jpg");
        vecOfImg.push_back("i002ra-mn.jpg");
        vecOfImg.push_back("i003ra-fn.jpg");
        vecOfImg.push_back("i003sa-fn.jpg");
        vecOfImg.push_back("i004qa-mn.jpg");
        vecOfImg.push_back("i004ra-mn.jpg");
        vecOfImg.push_back("i005ra-fn.jpg");
        vecOfImg.push_back("i006ra-mn.jpg");
        vecOfImg.push_back("i007qa-fn.jpg");
        vecOfImg.push_back("i008ra-mn.jpg");
        vecOfImg.push_back("i009qa-mn.jpg");
        vecOfImg.push_back("i009ra-mn.jpg");
        vecOfImg.push_back("i009sa-mn.jpg");
        vecOfImg.push_back("i010qa-mn.jpg");
        vecOfImg.push_back("i010sa-mn.jpg");
        vecOfImg.push_back("i011qa-mn.jpg");
        vecOfImg.push_back("i011ra-mn.jpg");
        vecOfImg.push_back("i012ra-mn.jpg");
        vecOfImg.push_back("i012sa-mn.jpg");
        vecOfImg.push_back("i014qa-fn.jpg");

        for(int imgC = 0; imgC < vecOfImg.size(); imgC++)
        {
            array2d<rgb_pixel> img;
            load_image(img, srcDir + vecOfImg.at(imgC));

            // Make the image larger so we can detect small faces.
            pyramid_up(img);

            // Now tell the face detector to give us a list of bounding boxes
            // around all the faces in the image.
            std::vector<rectangle> dets = detector(img);
            cout << "Number of faces detected: " << dets.size() << endl;

            // Now we will go ask the shape_predictor to tell us the pose of
            // each face we detected.
            std::vector<full_object_detection> shapes;
            for (unsigned long j = 0; j < dets.size(); ++j)
            {
                full_object_detection shape = sp(img, dets[j]);
                cout << "number of parts: "<< shape.num_parts() << endl;
                cout << "pixel position of first part: " << shape.part(0) << endl;
                cout << "pixel position of second part: " << shape.part(1) << endl;

                for(unsigned long i = 0; i < shape.num_parts(); i++)
                {
                    draw_solid_circle(img, shape.part(i), 2, rgb_pixel(100,255,100));
                }

                save_jpeg(img, dstDir + vecOfImg.at(imgC));

                // You get the idea, you can get all the face part locations if
                // you want them. Here we just store them in shapes so we can
                // put them on the screen.
                shapes.push_back(shape);
            }
        }
    }
    catch (exception& e)
    {
        cout << "\nexception thrown!" << endl;
        cout << e.what() << endl;
    }

    return a.exec();
}
If I use the default dataset and the default example, what is different between the default training and testing and mine? And how can I train a shape predictor that performs like shape_predictor_68_face_landmarks.dat?
【Comments】:

Even though you also asked this on the sourceforge page (and got no answer there), that page still contains a lot of information; this has almost certainly been discussed before :)

【Answer 1】:

The example dataset (/dlib-19.0/examples/faces/training_with_face_landmarks.xml) is far too small to train a high-quality model. It is not what the model that ships with dlib was trained on.
The examples use small datasets so that they run quickly. The point of all the examples is to explain the dlib API, not to be useful programs; they are just documentation. It is up to you to do something interesting with the dlib API.
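For a sense of scale, a model comparable to the bundled one has to be trained on a much larger annotated set. The sketch below points the same training call at the iBUG 300-W face landmark dataset, which dlib's documentation names as the training data for shape_predictor_68_face_landmarks.dat; the directory path and the labels_ibug_300W_train.xml file name are assumptions based on that dataset's distribution, not something stated in this answer.

// Minimal sketch: same training pipeline as in the question, pointed at the full
// iBUG 300-W annotations instead of the tiny XML in dlib-19.0/examples/faces/.
// Paths are assumptions about where the dataset was unpacked.
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;

int main()
{
    const std::string dir = "/home/user/datasets/ibug_300W_large_face_landmark_dataset/";

    dlib::array<array2d<unsigned char> > images;
    std::vector<std::vector<full_object_detection> > landmarks;
    load_image_dataset(images, landmarks, dir + "labels_ibug_300W_train.xml");

    // Default trainer settings; expect a run on this dataset to take much longer
    // and need far more RAM than the toy example, and to produce a much larger .dat.
    shape_predictor_trainer trainer;
    trainer.be_verbose();

    shape_predictor sp = trainer.train(images, landmarks);
    serialize(dir + "sp_ibug_300W.dat") << sp;

    std::cout << "trained and saved sp_ibug_300W.dat" << std::endl;
    return 0;
}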
【Comments】:

Could you tell me how to get the landmark-localization loss for a single frame from dlib.shape_predictor()?

【Answer 2】:
It is producing a 16.6 MB .dat file because you are either training with a small number of images or not using the right settings.

According to this GitHub issue, you are not using the optimal/default settings during training.

In your setup the trainer's oversampling amount is very high (300); the default is 20. You are also reducing the capacity of the model by increasing the regularization (making the nu parameter smaller) and by using trees of smaller depth:
Your nu parameter: 0.05. The default is 0.1.
Your tree depth: 2. The default is 4.
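For comparison, here is a minimal sketch (not part of the original answer) of a trainer configured with dlib's documented defaults, the values referred to above:

// Sketch: shape_predictor_trainer with dlib's default values spelled out,
// in contrast to the oversampling=300 / nu=0.05 / tree_depth=2 used in the question.
#include <dlib/image_processing.h>

dlib::shape_predictor_trainer make_default_trainer()
{
    dlib::shape_predictor_trainer trainer;
    trainer.set_oversampling_amount(20); // default; 300 mainly slows training down
    trainer.set_nu(0.1);                 // default; a smaller nu means stronger regularization
    trainer.set_tree_depth(4);           // default; depth 2 sharply limits model capacity
    trainer.be_verbose();
    return trainer;
}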
By changing these parameters and retraining (trial and error), you can find the best accuracy achievable at a smaller file size. Keep in mind that each training run takes roughly 45 minutes and that you need a machine with at least 16 GB of RAM.
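For that trial-and-error loop it helps to measure accuracy numerically instead of only looking at the rendered landmarks. Below is a minimal sketch in the style of dlib's own train_shape_predictor_ex.cpp that scores a serialized model on the held-out examples/faces/testing_with_face_landmarks.xml; the paths are assumptions, and the interocular-distance helpers are copies of the ones in the question's training program.

// Sketch: report the mean landmark error (normalized by interocular distance)
// of a serialized shape predictor on dlib's example test XML.
#include <dlib/image_processing.h>
#include <dlib/data_io.h>
#include <iostream>

using namespace dlib;
using namespace std;

// Same normalization helpers as in the question's training program.
static double interocular_distance (const full_object_detection& det)
{
    dlib::vector<double,2> l, r;
    for (unsigned long i = 36; i <= 41; ++i) l += det.part(i); // left-eye points
    for (unsigned long i = 42; i <= 47; ++i) r += det.part(i); // right-eye points
    return length(l/6.0 - r/6.0); // 6 points per eye, so divide by 6 to average
}

static std::vector<std::vector<double> > get_interocular_distances (
    const std::vector<std::vector<full_object_detection> >& objects)
{
    std::vector<std::vector<double> > temp(objects.size());
    for (unsigned long i = 0; i < objects.size(); ++i)
        for (unsigned long j = 0; j < objects[i].size(); ++j)
            temp[i].push_back(interocular_distance(objects[i][j]));
    return temp;
}

int main()
{
    // Assumed paths: adjust to wherever the examples and the trained model live.
    const string faces_directory = "/home/user/Documents/dlib-19.0/examples/faces/";

    dlib::array<array2d<unsigned char> > images_test;
    std::vector<std::vector<full_object_detection> > faces_test;
    load_image_dataset(images_test, faces_test, faces_directory + "testing_with_face_landmarks.xml");

    shape_predictor sp;
    deserialize(faces_directory + "sp_default_settings.dat") >> sp;

    cout << "mean testing error: "
         << test_shape_predictor(sp, images_test, faces_test, get_interocular_distances(faces_test))
         << endl;
    return 0;
}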
【Comments】: