Machine Learning Week 7: ex6 Review

This week we use a support vector machine (SVM) to build a simple spam classifier.

1. Support Vector Machine

1.1 Example dataset 1

ex6.m first loads the data from ex6data1.mat and plots it:

%% =============== Part 1: Loading and Visualizing Data ================
%  We start the exercise by first loading and visualizing the dataset. 
%  The following code will load the dataset into your environment and plot
%  the data.
%

fprintf('Loading and Visualizing Data ...\n')

% Load from ex6data1: 
% You will have X, y in your environment
load('ex6data1.mat');

% Plot training data
plotData(X, y);

fprintf('Program paused. Press enter to continue.\n');
pause;

[Figure: scatter plot of dataset 1]

Next, train a linear SVM with the provided implementation, using C = 1:

%% ==================== Part 2: Training Linear SVM ====================
%  The following code will train a linear SVM on the dataset and plot the
%  decision boundary learned.
%

% Load from ex6data1: 
% You will have X, y in your environment
load('ex6data1.mat');

fprintf('\nTraining Linear SVM ...\n')

% You should try to change the C value below and see how the decision
% boundary varies (e.g., try C = 1000)
C = 1;
model = svmTrain(X, y, C, @linearKernel, 1e-3, 20);
visualizeBoundaryLinear(X, y, model);

fprintf('Program paused. Press enter to continue.\n');
pause;

[Figure: linear decision boundary with C = 1]
Note that the single cross in the upper-left corner is misclassified.

Changing to C = 100 gives the following boundary:
[Figure: linear decision boundary with C = 100]
Now every training example is classified correctly, but this boundary clearly does not look like a natural fit for the data.
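The effect of **C** can be seen directly in the SVM objective $C \sum_i \max(0, 1 - y^{(i)}(w^\top x^{(i)} + b)) + \frac{1}{2}\lVert w \rVert^2$: a larger C makes each margin violation more expensive, pushing the optimizer to bend the boundary just to fix individual outliers. A minimal NumPy sketch of the cost (the toy data and weights here are made up for illustration):

```python
import numpy as np

def svm_cost(w, b, X, y, C):
    """Linear SVM objective: C * sum of hinge losses + (1/2)||w||^2.
    Labels y are in {-1, +1} here (the exercise's data uses {0, 1})."""
    margins = y * (X @ w + b)
    hinge = np.maximum(0, 1 - margins)
    return C * hinge.sum() + 0.5 * (w @ w)

# Toy data: three points, the last one violating the margin of w.
X = np.array([[2.0, 0.0], [-2.0, 0.0], [-0.5, 0.0]])
y = np.array([1.0, -1.0, 1.0])
w = np.array([1.0, 0.0]); b = 0.0

# With C = 1 the single violated margin adds little to the cost;
# with C = 100 the same violation dominates the objective.
print(svm_cost(w, b, X, y, 1.0))    # 2.0
print(svm_cost(w, b, X, y, 100.0))  # 150.5
```

This is why C = 100 above produces a boundary that chases the lone outlier while C = 1 tolerates it.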

1.2 SVM with Gaussian Kernels

1.2.1 Gaussian kernel

The Gaussian kernel function:

$$K_{\text{gaussian}}(x^{(i)}, x^{(j)}) = \exp\left(-\frac{\lVert x^{(i)} - x^{(j)} \rVert^2}{2\sigma^2}\right)$$

The distance between the two points can be computed directly with MATLAB's pdist function; it is also available in Octave after installing the statistics package from Octave-Forge (e.g. `pkg install -forge statistics`).
Complete gaussianKernel.m as follows:

function sim = gaussianKernel(x1, x2, sigma)
%RBFKERNEL returns a radial basis function kernel between x1 and x2
%   sim = gaussianKernel(x1, x2) returns a gaussian kernel between x1 and x2
%   and returns the value in sim

% Ensure that x1 and x2 are column vectors
x1 = x1(:); x2 = x2(:);

% You need to return the following variables correctly.
sim = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the similarity between x1
%               and x2 computed using a Gaussian kernel with bandwidth
%               sigma
%
%

x = [x1'; x2'];     % stack x1 and x2 as the two rows of a matrix
dis = pdist(x);     % distance between the two rows (statistics package)
sim = exp(-dis^2 / (2 * sigma^2));

% =============================================================

end
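For comparison, the same kernel written with NumPy. The test vectors below are the ones ex6.m uses to check this function; the result should come out to about 0.324652:

```python
import numpy as np

def gaussian_kernel(x1, x2, sigma):
    """RBF similarity between two vectors: exp(-||x1 - x2||^2 / (2 sigma^2))."""
    x1, x2 = np.ravel(x1), np.ravel(x2)
    return np.exp(-np.sum((x1 - x2) ** 2) / (2 * sigma ** 2))

# ||x1 - x2||^2 = 1 + 4 + 4 = 9, so this is exp(-9/8).
print(gaussian_kernel([1, 2, 1], [0, 4, -1], 2.0))  # ≈ 0.324652
```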

1.2.2 Example Dataset 2

Using the gaussianKernel function just completed, the provided SVM implementation can classify the data in ex6data2.mat.
ex6.m first visualizes the data:
[Figure: scatter plot of dataset 2]

After training the SVM with the RBF kernel, the following decision boundary is obtained:
[Figure: nonlinear decision boundary learned on dataset 2]

1.2.3 Example Dataset 3

Here we need the cross-validation set to pick the best **C** and **sigma**.
Both **C** and **sigma** are searched over [0.01 0.03 0.1 0.3 1 3 10 30], so 8 × 8 = 64 combinations are evaluated; the best pair is then used for the final model.
Complete dataset3Params.m as follows:
完成函数dataset3Params.m如下:

function [C, sigma] = dataset3Params(X, y, Xval, yval)
%EX6PARAMS returns your choice of C and sigma for Part 3 of the exercise
%where you select the optimal (C, sigma) learning parameters to use for SVM
%with RBF kernel
%   [C, sigma] = EX6PARAMS(X, y, Xval, yval) returns your choice of C and 
%   sigma. You should complete this function to return the optimal C and 
%   sigma based on a cross-validation set.
%

% You need to return the following variables correctly.
C = 1;
sigma = 0.3;

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return the optimal C and sigma
%               learning parameters found using the cross validation set.
%               You can use svmPredict to predict the labels on the cross
%               validation set. For example, 
%                   predictions = svmPredict(model, Xval);
%               will return the predictions on the cross validation set.
%
%  Note: You can compute the prediction error using 
%        mean(double(predictions ~= yval))
%

C_values = [0.01 0.03 0.1 0.3 1 3 10 30];
sigma_values = [0.01 0.03 0.1 0.3 1 3 10 30];

predictions_error = zeros(length(C_values), length(sigma_values));

for i = 1:length(C_values)
    for j = 1:length(sigma_values)
        C = C_values(i);
        sigma = sigma_values(j);
        model = svmTrain(X, y, C, @(x1, x2) gaussianKernel(x1, x2, sigma));
        predictions = svmPredict(model, Xval);
        predictions_error(i, j) = mean(double(predictions ~= yval));
    end
end

mm = min(min(predictions_error));
% Take only the first minimum, in case several (C, sigma) pairs tie
[i, j] = find(predictions_error == mm, 1);
C = C_values(i)
sigma = sigma_values(j)

% Answer is C = 1 and sigma = 0.1

% =========================================================================

end

The **C** and **sigma** that minimize the cross-validation error turn out to be **1** and **0.1**.
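The same grid-search pattern can be sketched in Python. Since only the selection logic matters here, svmTrain is replaced by a made-up surrogate error function (minimized at C = 1, sigma = 0.1, the pair the exercise selects). Using argmin plus unravel_index yields exactly one cell even when several grid entries tie for the minimum:

```python
import numpy as np

# Grid of candidate values, as in the exercise.
values = [0.01, 0.03, 0.1, 0.3, 1, 3, 10, 30]

def cv_error(C, sigma):
    """Stand-in for 'train an SVM, measure cross-validation error'.
    A hypothetical surrogate minimized at C = 1, sigma = 0.1."""
    return np.log10(C) ** 2 + (np.log10(sigma) + 1.0) ** 2

errors = np.array([[cv_error(C, s) for s in values] for C in values])

# argmin over the flattened grid, mapped back to a single (i, j) pair.
i, j = np.unravel_index(np.argmin(errors), errors.shape)
best_C, best_sigma = values[i], values[j]
print(best_C, best_sigma)  # 1 0.1
```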

The training-set scatter plot:
[Figure: scatter plot of dataset 3]

Using the **C** and **sigma** found above, the learned boundary looks like this:
[Figure: RBF decision boundary on dataset 3 with C = 1, sigma = 0.1]


2. Spam classification

We now use an SVM for spam classification.
$y = 1$ denotes spam and $y = 0$ denotes non-spam. Each email must also be converted into a feature vector $x \in \mathbb{R}^n$.
The dataset comes from the SpamAssassin Public Corpus. In this simplified classifier, email headers are ignored and only the body is classified.

2.1 Preprocessing Emails

The email contents need to be normalized:
[Figure: preprocessing steps from the exercise handout]
That is:
- convert all letters to lower case;
- strip all HTML tags;
- replace every URL with "httpaddr";
- replace every email address with "emailaddr";
- replace every number with "number";
- replace every dollar sign ($) with "dollar";
- stem each word;
- remove non-word characters (punctuation; tabs, newlines and runs of spaces collapse to a single space).

For example, an email that originally contains:
[Figure: raw sample email]

becomes, after the processing above:
[Figure: processed sample email]
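The normalization steps can be sketched with Python regular expressions (stemming omitted; the replacement order below is one reasonable choice, not necessarily the exact order processEmail.m uses):

```python
import re

def normalize_email(body):
    """Apply the normalization steps listed above (stemming omitted)."""
    body = body.lower()
    body = re.sub(r'<[^<>]+>', ' ', body)                    # strip HTML tags
    body = re.sub(r'(http|https)://\S+', 'httpaddr', body)   # URLs
    body = re.sub(r'\S+@\S+', 'emailaddr', body)             # email addresses
    body = re.sub(r'[0-9]+', 'number', body)                 # numbers
    body = re.sub(r'[$]+', 'dollar', body)                   # dollar signs
    body = re.sub(r'[^a-z0-9 ]', ' ', body)                  # non-word chars
    return re.sub(r'\s+', ' ', body).strip()                 # collapse spaces

print(normalize_email("Visit <a>http://example.com</a> to win $100!"))
# visit httpaddr to win dollarnumber
```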

2.2 Vocabulary list

In this simplified spam classifier, only the most common words are used. Rare words appear in only a small fraction of emails, and adding them as features could lead to overfitting.
The file vocab.txt contains the full vocabulary; a partial screenshot:
[Figure: excerpt of vocab.txt]
These are the words that occur at least 100 times in the spam corpus, 1899 in total. In practice, vocabularies of 10,000 to 50,000 words are common.

Complete processEmail.m as follows:

function word_indices = processEmail(email_contents)
%PROCESSEMAIL preprocesses the body of an email and
%returns a list of word_indices 
%   word_indices = PROCESSEMAIL(email_contents) preprocesses 
%   the body of an email and returns a list of indices of the 
%   words contained in the email. 
%

% Load Vocabulary
vocabList = getVocabList();

% Init return value
word_indices = [];

% ========================== Preprocess Email ===========================

% Find the Headers ( \n\n and remove )
% Uncomment the following lines if you are working with raw emails with the
% full headers

% hdrstart = strfind(email_contents, ([char(10) char(10)]));
% email_contents = email_contents(hdrstart(1):end);

% Lower case
email_contents = lower(email_contents);

% Strip all HTML
% Look for any expression that starts with < and ends with >, does not
% contain any < or > inside the tag, and replace it with a space
email_contents = regexprep(email_contents, '<[^<>]+>', ' ');

% Handle Numbers
% Look for one or more characters between 0-9
email_contents = regexprep(email_contents, '[0-9]+', 'number');

% Handle URLS
% Look for strings starting with http:// or https://
email_contents = regexprep(email_contents, ...
                           '(http|https)://[^\s]*', 'httpaddr');

% Handle Email Addresses
% Look for strings with @ in the middle
email_contents = regexprep(email_contents, '[^\s]+@[^\s]+', 'emailaddr');

% Handle $ sign
email_contents = regexprep(email_contents, '[$]+', 'dollar');

% ========================== Tokenize Email ===========================

% Output the email to screen as well
fprintf('\n==== Processed Email ====\n\n');

% Process file
l = 0;

while ~isempty(email_contents)

    % Tokenize and also get rid of any punctuation
    [str, email_contents] = ...
       strtok(email_contents, ...
              [' @$/#.-:&*+=[]?!(){},''">_<;%' char(10) char(13)]);

    % Remove any non alphanumeric characters
    str = regexprep(str, '[^a-zA-Z0-9]', '');

    % Stem the word 
    % (the porterStemmer sometimes has issues, so we use a try catch block)
    try str = porterStemmer(strtrim(str)); 
    catch str = ''; continue;
    end;

    % Skip the word if it is too short
    if length(str) < 1
       continue;
    end

    % Look up the word in the dictionary and add to word_indices if
    % found
    % ====================== YOUR CODE HERE ======================
    % Instructions: Fill in this function to add the index of str to
    %               word_indices if it is in the vocabulary. At this point
    %               of the code, you have a stemmed word from the email in
    %               the variable str. You should look up str in the
    %               vocabulary list (vocabList). If a match exists, you
    %               should add the index of the word to the word_indices
    %               vector. Concretely, if str = 'action', then you should
    %               look up the vocabulary list to find where in vocabList
    %               'action' appears. For example, if vocabList{18} =
    %               'action', then, you should add 18 to the word_indices 
    %               vector (e.g., word_indices = [word_indices ; 18]; ).
    % 
    % Note: vocabList{idx} returns the word with index idx in the
    %       vocabulary list.
    % 
    % Note: You can use strcmp(str1, str2) to compare two strings (str1 and
    %       str2). It will return 1 only if the two strings are equivalent.
    %

    index = find(strcmp(vocabList, str) == 1);   % find the index of str in vocabList
    word_indices = [word_indices; index];        % add index to word_indices

    % =============================================================

    % Print to screen, ensuring that the output lines are not too long
    if (l + length(str) + 1) > 78
        fprintf('\n');
        l = 0;
    end
    fprintf('%s ', str);
    l = l + length(str) + 1;

end

% Print footer
fprintf('\n\n=========================\n');

end
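The dictionary lookup at the heart of processEmail can be sketched in Python; the mini-vocabulary below is made up for illustration (the real vocab.txt has 1899 entries):

```python
# A hypothetical mini-vocabulary; the real vocab.txt has 1899 entries.
vocab_list = ['action', 'anyon', 'click', 'emailaddr', 'httpaddr', 'number']
vocab = {word: idx + 1 for idx, word in enumerate(vocab_list)}  # 1-based, as in MATLAB

def word_indices(tokens):
    """Map stemmed tokens to vocabulary indices, skipping unknown words."""
    return [vocab[t] for t in tokens if t in vocab]

print(word_indices(['anyon', 'know', 'click', 'httpaddr']))  # [2, 3, 5]
```

A dict lookup is O(1) per word, whereas the MATLAB `find(strcmp(...))` scans the whole cell array for every token; for a 1899-word vocabulary either is fast enough.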

A processed example looks like this:
[Figure: processed email and its word indices]

Extracting features from Emails

Each processed email is represented as a feature vector $x$, where $x_i \in \{0, 1\}$ indicates whether word $i$ of the vocabulary appears in the email. That is, each email becomes a vector of the form:
[Figure: example binary feature vector]
Complete emailFeatures.m as follows:

function x = emailFeatures(word_indices)
%EMAILFEATURES takes in a word_indices vector and produces a feature vector
%from the word indices
%   x = EMAILFEATURES(word_indices) takes in a word_indices vector and 
%   produces a feature vector from the word indices. 

% Total number of words in the dictionary
n = 1899;

% You need to return the following variables correctly.
x = zeros(n, 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Fill in this function to return a feature vector for the
%               given email (word_indices). To help make it easier to 
%               process the emails, we have already pre-processed each
%               email and converted each word in the email into an index in
%               a fixed dictionary (of 1899 words). The variable
%               word_indices contains the list of indices of the words
%               which occur in one email.
% 
%               Concretely, if an email has the text:
%
%                  The quick brown fox jumped over the lazy dog.
%
%               Then, the word_indices vector for this text might look 
%               like:
%               
%                   60  100   33   44   10     53  60  58   5
%
%               where, we have mapped each word onto a number, for example:
%
%                   the   -- 60
%                   quick -- 100
%                   ...
%
%              (note: the above numbers are just an example and are not the
%               actual mappings).
%
%              Your task is to take one such word_indices vector and construct
%              a binary feature vector that indicates whether a particular
%              word occurs in the email. That is, x(i) = 1 when word i
%              is present in the email. Concretely, if the word 'the' (say,
%              index 60) appears in the email, then x(60) = 1. The feature
%              vector should look like:
%
%              x = [ 0 0 0 0 1 0 0 0 ... 0 0 0 0 1 ... 0 0 0 1 0 ..];
%
%

x(word_indices) = 1; 





% =========================================================================


end
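The same one-hot idea in NumPy, a sketch using the hypothetical index list from the comments above (MATLAB's 1-based indices are shifted down by one):

```python
import numpy as np

n = 1899  # vocabulary size in the exercise

def email_features(word_indices):
    """Binary feature vector: x[i-1] = 1 iff vocabulary word i occurs."""
    x = np.zeros(n)
    x[np.asarray(word_indices) - 1] = 1  # shift MATLAB's 1-based indices
    return x

# The example indices from the comments; 60 appears twice but is set once.
x = email_features([60, 100, 33, 44, 10, 53, 60, 58, 5])
print(int(x.sum()))  # 8 distinct words set
```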

2.3 Training SVM for spam classification

The training set contains 4000 examples of spam and non-spam email, and the test set contains another 1000 emails.
ex6_spam.m trains the SVM and evaluates it, printing the classification accuracy on the training and test sets:

%% =========== Part 3: Train Linear SVM for Spam Classification ========
%  In this section, you will train a linear classifier to determine if an
%  email is Spam or Not-Spam.

% Load the Spam Email dataset
% You will have X, y in your environment
load('spamTrain.mat');

fprintf('\nTraining Linear SVM (Spam Classification)\n')
fprintf('(this may take 1 to 2 minutes) ...\n')

C = 0.1;
model = svmTrain(X, y, C, @linearKernel);

p = svmPredict(model, X);

fprintf('Training Accuracy: %f\n', mean(double(p == y)) * 100);

%% ================= Part 5: Top Predictors of Spam ====================
%  Since the model we are training is a linear SVM, we can inspect the
%  weights learned by the model to understand better how it is determining
%  whether an email is spam or not. The following code finds the words with
%  the highest weights in the classifier. Informally, the classifier
%  'thinks' that these words are the most likely indicators of spam.
%

% Sort the weights and obtain the vocabulary list
[weight, idx] = sort(model.w, 'descend');
vocabList = getVocabList();

fprintf('\nTop predictors of spam: \n');
for i = 1:15
    fprintf(' %-15s (%f) \n', vocabList{idx(i)}, weight(i));
end

fprintf('\n\n');
fprintf('\nProgram paused. Press enter to continue.\n');
pause;

[Figure: training and test accuracy output]
As shown above, the accuracies reach 99.8% on the training set and 98.9% on the test set.

2.4 Top predictors for spam

[Figure: top 15 predictors of spam with their weights]
Training reveals that the words above are the strongest indicators of whether an email is spam.
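Extracting the top-weighted words is a plain descending sort over the weight vector, sketched here with made-up weights and words (in the exercise these come from model.w and getVocabList()):

```python
import numpy as np

# Hypothetical weights and (stemmed) words for illustration only.
vocab = ['click', 'our', 'remov', 'guarante', 'visit']
w = np.array([0.47, 0.34, 0.42, 0.38, 0.44])

order = np.argsort(w)[::-1]              # weight indices, largest first
top = [(vocab[i], w[i]) for i in order[:3]]
print(top)  # [('click', 0.47), ('visit', 0.44), ('remov', 0.42)]
```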

2.5 Optional (ungraded) exercise: Try your own emails

Test the spam sample in spamSample1.txt with ex6_spam:
[Figure: contents of spamSample1.txt]
Result:
[Figure: classifier output for spamSample1.txt]
It is classified as spam, which is correct.
Next, try a non-spam email (here emailSample2.txt):
[Figure: contents of emailSample2.txt]
Result:
[Figure: classifier output for emailSample2.txt]
It is classified as non-spam, also correct.

2.6 Optional (ungraded) exercise: Build up your own dataset

You can download the full dataset from the SpamAssassin Public Corpus and train on it yourself, or build a new vocabulary from the high-frequency words in your own dataset. You can also use a highly optimized SVM package such as LIBSVM.

