Logistic Regression Code

Posted by ziseweilai

Code

1. Logistic Regression

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# 1. Load the data (Breast Cancer Wisconsin dataset from the UCI repository)
names = ['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size', 'Uniformity of Cell Shape',
                   'Marginal Adhesion', 'Single Epithelial Cell Size', 'Bare Nuclei', 'Bland Chromatin',
                   'Normal Nucleoli', 'Mitoses', 'Class']

data = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/breast-cancer-wisconsin/breast-cancer-wisconsin.data",
                  names=names)
# 2. Basic data preprocessing
# 2.1 Handle missing values (marked with "?" in this dataset)
data = data.replace(to_replace="?", value=np.nan)
data = data.dropna()
# 2.2 Select the features and the target
x = data.iloc[:, 1:10]
y = data["Class"]
# 2.3 Split the data into training and test sets
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=22)
# 3. Feature engineering (standardization)
transfer = StandardScaler()
x_train = transfer.fit_transform(x_train)
x_test = transfer.transform(x_test)
# 4. Machine learning (logistic regression)
estimator = LogisticRegression()
estimator.fit(x_train, y_train)
# 5. Model evaluation
y_predict = estimator.predict(x_test)
print("正确率score为", estimator.score(x_test, y_test))

2. Simple Feature Extraction

from sklearn.feature_extraction import DictVectorizer


def dict_demo():
    """
    对字典类型的数据进行特征抽取
    :return: None
    """
    data = [{'city': '北京','temperature':100}, {'city': '上海','temperature':60}, {'city': '深圳','temperature':30}]
    # 1. Instantiate a transformer
    transfer = DictVectorizer(sparse=False)
    # 2. Call fit_transform
    data = transfer.fit_transform(data)
    print("Result:\n", data)
    # Print the feature names
    # (on scikit-learn >= 1.0 use transfer.get_feature_names_out(); get_feature_names() was removed in 1.2)
    print("Feature names:\n", transfer.get_feature_names())

    return None


if __name__ == '__main__':
    dict_demo()
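
For reference, DictVectorizer defaults to sparse=True and then returns a scipy sparse matrix instead of a dense array. A minimal sketch of the default behaviour, assuming the same toy data as above:

from sklearn.feature_extraction import DictVectorizer

data = [{'city': '北京', 'temperature': 100}, {'city': '上海', 'temperature': 60}]
transfer = DictVectorizer()              # sparse=True by default
result = transfer.fit_transform(data)
print(result)                            # (row, col) value entries of the sparse matrix
print(result.toarray())                  # the same data as a dense ndarray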

3. Text Feature Extraction

from sklearn.feature_extraction.text import CountVectorizer


def text_count_demo():
    """
    对文本进行特征抽取,countvetorizer
    :return: None
    """
    data = ["life is short,i like like python",
            "life is too long,i dislike python"]
    # 1. Instantiate a transformer
    # transfer = CountVectorizer(sparse=False)  # note: CountVectorizer has no sparse parameter
    transfer = CountVectorizer()
    # 2. Call fit_transform
    data = transfer.fit_transform(data)
    print("Text feature extraction result:\n", data.toarray())
    print("Feature names:\n", transfer.get_feature_names())  # get_feature_names_out() on newer scikit-learn

    return None


if __name__ == '__main__':
    text_count_demo()
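
CountVectorizer can also take a stop_words list to drop tokens that carry little information before counting. A small sketch on the same two sentences (the stop-word list here is only an illustrative assumption):

from sklearn.feature_extraction.text import CountVectorizer

data = ["life is short,i like like python",
        "life is too long,i dislike python"]
transfer = CountVectorizer(stop_words=["is", "too"])   # these tokens are removed before counting
result = transfer.fit_transform(data)
print(result.toarray())
print(transfer.get_feature_names_out())                # available on scikit-learn >= 1.0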

4. Chinese Text Feature Extraction

from sklearn.feature_extraction.text import CountVectorizer
import jieba


def cut_word(text):
    """
    对中文进行分词
    "我爱北京天安门"————>"我 爱 北京 天安门"
    :param text:
    :return: text
    """
    # 用结巴对中文字符串进行分词
    text = " ".join(list(jieba.cut(text)))

    return text


def text_chinese_count_demo2():
    """
    对中文进行特征抽取
    :return: None
    """
    data = ["一种还是一种今天很残酷,明天更残酷,后天很美好,但绝对大部分是死在明天晚上,所以每个人不要放弃今天。",
            "我们看到的从很远星系来的光是在几百万年之前发出的,这样当我们看到宇宙时,我们是在看它的过去。",
            "如果只用一种方式了解某样事物,你就不会真正了解它。了解事物真正含义的秘密取决于如何将其与我们所了解的事物相联系。"]
    # Segment the raw sentences into space-separated words
    text_list = []
    for sent in data:
        text_list.append(cut_word(sent))
    print(text_list)

    # 1. Instantiate a transformer
    # transfer = CountVectorizer(sparse=False)
    transfer = CountVectorizer()
    # 2. Call fit_transform
    data = transfer.fit_transform(text_list)
    print("Text feature extraction result:\n", data.toarray())
    print("Feature names:\n", transfer.get_feature_names())  # get_feature_names_out() on newer scikit-learn

    return None


if __name__ == '__main__':
    text_chinese_count_demo2()
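
The cut_word step matters because CountVectorizer's default tokenizer splits on whitespace and punctuation and keeps tokens of two or more word characters, so an unsegmented Chinese clause is treated as one single token. A minimal sketch illustrating the difference (the two short sentences are just example data, and get_feature_names_out() assumes scikit-learn >= 1.0):

from sklearn.feature_extraction.text import CountVectorizer
import jieba

raw = ["我爱北京天安门", "天安门上太阳升"]
print(CountVectorizer().fit(raw).get_feature_names_out())        # each clause becomes one token
segmented = [" ".join(jieba.cut(s)) for s in raw]
print(CountVectorizer().fit(segmented).get_feature_names_out())  # word-level tokens; single characters are still dropped by the default pattern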
