Titanic Survival Prediction

Posted by darkchii


  The code below is all adapted from Kaggle, with only minor modifications:

import pandas as pd
import numpy as np

from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

train_df = pd.read_csv('../data/train.csv')
test_df = pd.read_csv('../data/test.csv')

# Drop Ticket and Cabin, which are not used as features here
train_df = train_df.drop(['Ticket', 'Cabin'], axis=1)
test_df = test_df.drop(['Ticket', 'Cabin'], axis=1)
combine = [train_df, test_df]

# Extract the honorific (Mr, Mrs, Miss, ...) from the Name column
for dataset in combine:
    dataset['Title'] = dataset.Name.str.extract(r' ([A-Za-z]+)\.', expand=False)

for dataset in combine:
    dataset['Title'] = dataset['Title'].replace(['Lady', 'Countess', 'Capt', 'Col',
                                                 'Don', 'Dr', 'Major', 'Rev', 'Sir', 'Jonkheer', 'Dona'], 'Rare')

    dataset['Title'] = dataset['Title'].replace('Mlle', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Ms', 'Miss')
    dataset['Title'] = dataset['Title'].replace('Mme', 'Mrs')

title_mapping = {"Mr": 1, "Miss": 2, "Mrs": 3, "Master": 4, "Rare": 5}
for dataset in combine:
    dataset['Title'] = dataset['Title'].map(title_mapping)
    dataset['Title'] = dataset['Title'].fillna(0)

train_df = train_df.drop(['Name', 'PassengerId'], axis=1)
test_df = test_df.drop(['Name'], axis=1)
combine = [train_df, test_df]

for dataset in combine:
    dataset['Sex'] = dataset['Sex'].map({'female': 1, 'male': 0}).astype(int)

# Median age guess for each (Sex, Pclass) combination, used to fill missing ages
guess_ages = np.zeros((2, 3))

for dataset in combine:
    for i in range(0, 2):
        for j in range(0, 3):
            guess_df = dataset[(dataset['Sex'] == i) &
                               (dataset['Pclass'] == j + 1)]['Age'].dropna()

            # age_mean = guess_df.mean()
            # age_std = guess_df.std()
            # age_guess = rnd.uniform(age_mean - age_std, age_mean + age_std)

            age_guess = guess_df.median()

            # Convert random age float to nearest .5 age
            guess_ages[i, j] = int(age_guess / 0.5 + 0.5) * 0.5

    for i in range(0, 2):
        for j in range(0, 3):
            dataset.loc[(dataset.Age.isnull()) & (dataset.Sex == i) & (dataset.Pclass == j + 1),
                        'Age'] = guess_ages[i, j]

    dataset['Age'] = dataset['Age'].astype(int)

# Replace Age with an ordinal band (0-4)
train_df['AgeBand'] = pd.cut(train_df['Age'], 5)
for dataset in combine:
    dataset.loc[dataset['Age'] <= 16, 'Age'] = 0
    dataset.loc[(dataset['Age'] > 16) & (dataset['Age'] <= 32), 'Age'] = 1
    dataset.loc[(dataset['Age'] > 32) & (dataset['Age'] <= 48), 'Age'] = 2
    dataset.loc[(dataset['Age'] > 48) & (dataset['Age'] <= 64), 'Age'] = 3
    dataset.loc[dataset['Age'] > 64, 'Age'] = 4

train_df = train_df.drop(['AgeBand'], axis=1)
combine = [train_df, test_df]

for dataset in combine:
    dataset['FamilySize'] = dataset['SibSp'] + dataset['Parch'] + 1

for dataset in combine:
    dataset['IsAlone'] = 0
    dataset.loc[dataset['FamilySize'] == 1, 'IsAlone'] = 1

train_df = train_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
test_df = test_df.drop(['Parch', 'SibSp', 'FamilySize'], axis=1)
combine = [train_df, test_df]

# Interaction feature: banded Age multiplied by passenger class
for dataset in combine:
    dataset['Age*Class'] = dataset['Age'] * dataset['Pclass']

freq_port = train_df.Embarked.dropna().mode()[0]
for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].fillna(freq_port)

for dataset in combine:
    dataset['Embarked'] = dataset['Embarked'].map({'S': 0, 'C': 1, 'Q': 2}).astype(int)

# Fill missing Fare values in the test set with the median fare
test_df['Fare'] = test_df['Fare'].fillna(test_df['Fare'].dropna().median())
train_df['FareBand'] = pd.qcut(train_df['Fare'], 4)
# Bin Fare into 4 ordinal bands using the training-set quartile boundaries
for dataset in combine:
    dataset.loc[dataset['Fare'] <= 7.91, 'Fare'] = 0
    dataset.loc[(dataset['Fare'] > 7.91) & (dataset['Fare'] <= 14.454), 'Fare'] = 1
    dataset.loc[(dataset['Fare'] > 14.454) & (dataset['Fare'] <= 31), 'Fare'] = 2
    dataset.loc[dataset['Fare'] > 31, 'Fare'] = 3
    dataset['Fare'] = dataset['Fare'].astype(int)

train_df = train_df.drop(['FareBand'], axis=1)
combine = [train_df, test_df]

X_train = train_df.drop("Survived", axis=1)
Y_train = train_df["Survived"]
X_test = test_df.drop("PassengerId", axis=1).copy()

decision = DecisionTreeClassifier()
# Hold out 20% of the training data as a validation set
x_train, x_val, y_train, y_val = train_test_split(X_train, Y_train, test_size=0.2, random_state=0)

decision.fit(x_train, y_train)
print(decision.score(x_val, y_val))
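
X_test is built above from test_df but never used in the snippet; in the usual Kaggle workflow it would feed the submission file. A minimal sketch of that step, assuming the model is refit on the full training set and an output file named submission.csv (both assumptions, not from the original post):

# Hedged sketch: refit on all training data and write a Kaggle submission.
# The file name 'submission.csv' is an assumption, not taken from the original post.
decision.fit(X_train, Y_train)
predictions = decision.predict(X_test)
submission = pd.DataFrame({'PassengerId': test_df['PassengerId'], 'Survived': predictions})
submission.to_csv('submission.csv', index=False)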

   On top of the original code I added cross-validation, which improved the score a little; a sketch of that step is shown below.
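
The snippet above only uses a single hold-out split, so the cross-validation mentioned here is not actually shown. A minimal sketch of what it could look like with scikit-learn's cross_val_score (the 5-fold setting is an assumption):

from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Hedged sketch of the cross-validation step; the fold count (cv=5) is an assumption.
decision = DecisionTreeClassifier(random_state=0)
scores = cross_val_score(decision, X_train, Y_train, cv=5, scoring='accuracy')
print(scores.mean(), scores.std())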
