Key Error: None of [Int64Index...] dtype='int64'] are in the columns
【Posted】: 2019-09-04 03:16:10
【Question】: I'm trying to shuffle my indices using the np.random.shuffle() method, but I keep getting an error that I don't understand. If anyone could help me figure this out, I'd really appreciate it. Thanks!
When I first created the raw_csv_data variable, I tried using delimiter=',' and delim_whitespace=0, because I thought that was the solution to a different problem, but it kept throwing the same error.
import pandas as pd
import numpy as np
from sklearn.preprocessing import StandardScaler
#%%
raw_csv_data= pd.read_csv('Absenteeism-data.csv')
print(raw_csv_data)
#%%
df= raw_csv_data.copy()
print(display(df))
#%%
pd.options.display.max_columns=None
pd.options.display.max_rows=None
print(display(df))
#%%
print(df.info())
#%%
df=df.drop(['ID'], axis=1)
#%%
print(display(df.head()))
#%%
#Our goal is to see who is more likely to be absent. Let's define
#our targets from our dependent variable, Absenteeism Time in Hours
print(df['Absenteeism Time in Hours'])
print(df['Absenteeism Time in Hours'].median())
#%%
targets= np.where(df['Absenteeism Time in Hours']>df['Absenteeism Time in Hours'].median(),1,0)
#%%
print(targets)
#%%
df['Excessive Absenteeism']= targets
#%%
print(df.head())
#%%
#Let's Separate the Day and Month Values to see if there is correlation
#between Day of week/month with absence
print(type(df['Date'][0]))
#%%
df['Date']= pd.to_datetime(df['Date'], format='%d/%m/%Y')
#%%
print(df['Date'])
print(type(df['Date'][0]))
#%%
#Extracting the Month Value
print(df['Date'][0].month)
#%%
list_months=[]
print(list_months)
#%%
print(df.shape)
#%%
for i in range(df.shape[0]):
    list_months.append(df['Date'][i].month)
#%%
print(list_months)
#%%
print(len(list_months))
#%%
#Let's Create a Month Value Column for df
df['Month Value']= list_months
#%%
print(df.head())
#%%
#Now let's extract the day of the week from date
df['Date'][699].weekday()
#%%
def date_to_weekday(date_value):
    return date_value.weekday()
#%%
df['Day of the Week']= df['Date'].apply(date_to_weekday)
#%%
print(df.head())
#%%
df= df.drop(['Date'], axis=1)
#%%
print(df.columns.values)
#%%
reordered_columns= ['Reason for Absence', 'Month Value', 'Day of the Week',
                    'Transportation Expense', 'Distance to Work', 'Age',
                    'Daily Work Load Average', 'Body Mass Index', 'Education',
                    'Children', 'Pets',
                    'Absenteeism Time in Hours', 'Excessive Absenteeism']
#%%
df=df[reordered_columns]
print(df.head())
#%%
#First Checkpoint
df_date_mod= df.copy()
#%%
print(df_date_mod)
#%%
#Let's Standardize our inputs, ignoring the Reasons and Education columns
#Because they are labelled by a separate categorical criteria, not numerically
print(df_date_mod.columns.values)
#%%
unscaled_inputs= df_date_mod.loc[:, ['Month Value', 'Day of the Week',
                                     'Transportation Expense', 'Distance to Work', 'Age',
                                     'Daily Work Load Average', 'Body Mass Index',
                                     'Children', 'Pets', 'Absenteeism Time in Hours']]
#%%
print(display(unscaled_inputs))
#%%
absenteeism_scaler= StandardScaler()
#%%
absenteeism_scaler.fit(unscaled_inputs)
#%%
scaled_inputs= absenteeism_scaler.transform(unscaled_inputs)
#%%
print(display(scaled_inputs))
#%%
print(scaled_inputs.shape)
#%%
scaled_inputs= pd.DataFrame(scaled_inputs, columns=['Month Value', 'Day of the Week',
                                                    'Transportation Expense', 'Distance to Work', 'Age',
                                                    'Daily Work Load Average', 'Body Mass Index',
                                                    'Children', 'Pets', 'Absenteeism Time in Hours'])
print(display(scaled_inputs))
#%%
df_date_mod= df_date_mod.drop(['Month Value', 'Day of the Week', 'Transportation Expense',
                               'Distance to Work', 'Age', 'Daily Work Load Average',
                               'Body Mass Index', 'Children', 'Pets',
                               'Absenteeism Time in Hours'], axis=1)
print(display(df_date_mod))
#%%
df_date_mod=pd.concat([df_date_mod,scaled_inputs], axis=1)
print(display(df_date_mod))
#%%
df_date_mod= df_date_mod[reordered_columns]
print(display(df_date_mod.head()))
#%%
#Checkpoint
df_date_scale_mod= df_date_mod.copy()
print(display(df_date_scale_mod.head()))
#%%
#Let's Analyze the Reason for Absence Category
print(df_date_scale_mod['Reason for Absence'])
#%%
print(df_date_scale_mod['Reason for Absence'].min())
print(df_date_scale_mod['Reason for Absence'].max())
#%%
print(df_date_scale_mod['Reason for Absence'].unique())
#%%
print(len(df_date_scale_mod['Reason for Absence'].unique()))
#%%
print(sorted(df['Reason for Absence'].unique()))
#%%
reason_columns= pd.get_dummies(df['Reason for Absence'])
print(reason_columns)
#%%
reason_columns['check']= reason_columns.sum(axis=1)
print(reason_columns)
#%%
print(reason_columns['check'].sum(axis=0))
#%%
print(reason_columns['check'].unique())
#%%
reason_columns=reason_columns.drop(['check'], axis=1)
print(reason_columns)
#%%
reason_columns=pd.get_dummies(df_date_scale_mod['Reason for Absence'],
drop_first=True)
print(reason_columns)
#%%
print(df_date_scale_mod.columns.values)
#%%
print(reason_columns.columns.values)
#%%
df_date_scale_mod= df_date_scale_mod.drop(['Reason for Absence'],
axis=1)
print(df_date_scale_mod)
#%%
reason_type_1= reason_columns.loc[:, 1:14].max(axis=1)
reason_type_2= reason_columns.loc[:, 15:17].max(axis=1)
reason_type_3= reason_columns.loc[:, 18:21].max(axis=1)
reason_type_4= reason_columns.loc[:, 22:].max(axis=1)
#%%
print(reason_type_1)
print(reason_type_2)
print(reason_type_3)
print(reason_type_4)
#%%
print(df_date_scale_mod.head())
#%%
df_date_scale_mod= pd.concat([df_date_scale_mod,
reason_type_1,reason_type_2, reason_type_3, reason_type_4], axis=1)
print(df_date_scale_mod.head())
#%%
print(df_date_scale_mod.columns.values)
#%%
column_names= ['Month Value', 'Day of the Week', 'Transportation Expense',
               'Distance to Work', 'Age', 'Daily Work Load Average', 'Body Mass Index',
               'Education', 'Children', 'Pets', 'Absenteeism Time in Hours',
               'Excessive Absenteeism', 'Reason_1', 'Reason_2', 'Reason_3', 'Reason_4']
df_date_scale_mod.columns= column_names
print(df_date_scale_mod.head())
#%%
column_names_reordered= ['Reason_1', 'Reason_2', 'Reason_3',
'Reason_4','Month Value','Day of the Week','Transportation Expense',
'Distance to Work','Age','Daily Work Load Average','Body Mass Index',
'Education','Children','Pets','Absenteeism Time in Hours',
'Excessive Absenteeism']
df_date_scale_mod=df_date_scale_mod[column_names_reordered]
print(display(df_date_scale_mod.head()))
#%%
#Checkpoint
df_date_scale_mod_reas= df_date_scale_mod.copy()
print(df_date_scale_mod_reas.head())
#%%
#Let's Look at the Education column now
print(df_date_scale_mod_reas['Education'].unique())
#This shows us that education is rated from 1-4 based on level
#of completion
#%%
print(df_date_scale_mod_reas['Education'].value_counts())
#The overwhelming majority of workers are highschool educated, while the
#rest have higher degrees
#%%
#We'll create our dummy variables as highschool and higher education
df_date_scale_mod['Education']= df_date_scale_mod['Education'].map({1:0, 2:1, 3:1, 4:1})
#%%
print(df_date_scale_mod_reas['Education'].unique())
#%%
print(df_date_scale_mod_reas['Education'].value_counts())
#%%
#Checkpoint
df_preprocessed= df_date_scale_mod_reas.copy()
print(display(df_preprocessed.head()))
#%%
#%%
#Split Inputs from targets
scaled_inputs_all= df_preprocessed.loc[:,'Reason_1':'Absenteeism Time in Hours']
print(display(scaled_inputs_all.head()))
print(scaled_inputs_all.shape)
#%%
targets_all= df_preprocessed.loc[:,'Excessive Absenteeism']
print(display(targets_all.head()))
print(targets_all.shape)
#%%
#Shuffle Inputs and targets
shuffled_indices= np.arange(scaled_inputs_all.shape[0])
np.random.shuffle(shuffled_indices)
shuffled_inputs= scaled_inputs_all[shuffled_indices]
shuffled_targets= targets_all[shuffled_indices]
This is the error I keep getting when I try to shuffle:
KeyError                                  Traceback (most recent call last)
      1 shuffled_indices= np.arange(scaled_inputs_all.shape[0])
      2 np.random.shuffle(shuffled_indices)
----> 3 shuffled_inputs= scaled_inputs_all[shuffled_indices]
      4 shuffled_targets= targets_all[shuffled_indices]

~\Anaconda3\lib\site-packages\pandas\core\frame.py in __getitem__(self, key)
   2932             key = list(key)
   2933             indexer = self.loc._convert_to_indexer(key, axis=1,
-> 2934                                                    raise_missing=True)
   2935
   2936         # take() does not accept boolean indexers

~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _convert_to_indexer(self, obj, axis, is_setter, raise_missing)
   1352                 kwargs = {'raise_missing': True if is_setter else
   1353                           raise_missing}
-> 1354                 return self._get_listlike_indexer(obj, axis, **kwargs)[1]
   1355             else:
   1356                 try:

~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _get_listlike_indexer(self, key, axis, raise_missing)
   1159         self._validate_read_indexer(keyarr, indexer,
   1160                                     o._get_axis_number(axis),
-> 1161                                     raise_missing=raise_missing)
   1162         return keyarr, indexer
   1163

~\Anaconda3\lib\site-packages\pandas\core\indexing.py in _validate_read_indexer(self, key, indexer, axis, raise_missing)
   1244                 raise KeyError(
   1245                     u"None of [{key}] are in the [{axis}]".format(
-> 1246                         key=key, axis=self.obj._get_axis_name(axis)))
   1247
   1248             # We (temporarily) allow for some missing keys with .loc, except in

KeyError: "None of [Int64Index([560, 320, 405, 141, 154, 370, 656,  26, 444, 307,
            ...
            429, 542, 676, 588, 315, 284, 293, 607, 197, 250],
           dtype='int64', length=700)] are in the [columns]"
【Comments】:
A reproducible example with a few rows of the dataframe would help in looking into this. — @NileshIngle Would you like me to send you the dataset I'm using?

【Answer 1】: You created your scaled_inputs_all DataFrame using the loc function, so it most likely does not contain consecutive index values.

On the other hand, you created shuffled_indices as a shuffle of an array of consecutive numbers.

Remember that scaled_inputs_all[shuffled_indices] retrieves the rows of scaled_inputs_all whose index values equal the elements of shuffled_indices.

Perhaps you should write instead:

scaled_inputs_all.iloc[shuffled_indices]

Note that iloc provides purely integer-location based indexing, regardless of the index values, which is exactly what you need here.
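To make the distinction concrete, here is a minimal sketch; the tiny frame and its index labels are made up purely for illustration:

import pandas as pd

demo = pd.DataFrame({'a': [10, 20, 30]}, index=[5, 7, 9])  # non-consecutive index labels

print(demo.loc[[5, 7, 9]])    # label-based: selects rows whose index VALUE is 5, 7, 9
print(demo.iloc[[2, 0, 1]])   # position-based: third, first, second row, whatever the labels are
# demo.loc[[0, 1, 2]] would raise a KeyError, because no row carries those labels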
【Comments】:
Oh my gosh, it worked! Thank you! Could you explain once more why? I didn't understand the explanation above. — Do I also need to write targets_all.iloc[shuffled_indices]? — Yes, but make sure both DataFrames contain the same number of rows; otherwise you may get an index out of range error. Perhaps the ranges of the indices should be different. — I get the same error when running the following code. Can you tell me how to fix it? skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0); for train_index, test_index in skf.split(x, y): x_train, x_test = x[train_index], x[test_index]; y_train, y_test = y[train_index], y[test_index]

【Answer 2】: You may also run into the same error when using KFold for machine learning.
The solution is as follows:
You need to use iloc:
X_train, X_test = X.iloc[train_index], X.iloc[test_index]
y_train, y_test = y.iloc[train_index], y.iloc[test_index]
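For context, a self-contained sketch of that pattern; the toy X and y below are made up just to show where .iloc goes:

import pandas as pd
from sklearn.model_selection import StratifiedKFold

# Made-up toy data: 20 rows, binary target
X = pd.DataFrame({'feature': range(20)})
y = pd.Series([0, 1] * 10)

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_index, test_index in skf.split(X, y):
    # split() yields positional indices, so index the pandas objects with .iloc
    X_train, X_test = X.iloc[train_index], X.iloc[test_index]
    y_train, y_test = y.iloc[train_index], y.iloc[test_index]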
【Comments】:
【Answer 3】: I had this problem too. I solved it by converting the dataframe and series into arrays.
Try the following line of code:
scaled_inputs_all.iloc[shuffled_indices].values
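Spelled out slightly more (a sketch that reuses the variable names from the question and assumes they are defined as above): once the data are plain NumPy arrays, an integer array always selects rows by position:

# Convert to plain NumPy arrays, then shuffle positionally
inputs_array = scaled_inputs_all.values
targets_array = targets_all.values

shuffled_inputs = inputs_array[shuffled_indices]     # NumPy fancy indexing picks rows by position
shuffled_targets = targets_array[shuffled_indices]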
【Comments】:
【Answer 4】: If you reset the index after dropping rows from the dataframe, that should stop the key error.
You can do that by running this after df.drop:
df = df.reset_index(drop=True)
Or, equivalently:
df.reset_index(drop=True, inplace=True)
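A small sketch of why this helps, with a made-up toy frame: dropping rows leaves gaps in the index, and reset_index(drop=True) renumbers it back to 0..n-1 so that lookups built from np.arange line up again:

import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [10, 20, 30, 40]})
df = df.drop([1])                  # index labels are now 0, 2, 3
df = df.reset_index(drop=True)     # index labels are back to 0, 1, 2

idx = np.arange(df.shape[0])
np.random.shuffle(idx)
print(df.loc[idx])                 # works: every label in idx exists in the index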
【Comments】:
【Answer 5】: Ran into the same error:
KeyError: "None of [Int64Index([26], dtype='int64')] are in the [index]"
Solved it by saving the dataframe to a local file and opening it again, as follows:
df.to_csv('Step1.csv',index=False)
df = pd.read_csv('Step1.csv')
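(This round trip works because pd.read_csv assigns the reloaded DataFrame a fresh default 0..n-1 RangeIndex, so row labels and positions line up again.)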
【Comments】:
【Answer 6】: The following error occurred when dropping rows by index based on a condition on a column value:
    return self._engine.get_loc(key)
  File "pandas/_libs/index.pyx", line 107, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/index.pyx", line 131, in pandas._libs.index.IndexEngine.get_loc
  File "pandas/_libs/hashtable_class_helper.pxi", line 992, in pandas._libs.hashtable.Int64HashTable.get_item
  File "pandas/_libs/hashtable_class_helper.pxi", line 998, in pandas._libs.hashtable.Int64HashTable.get_item
KeyError: 226
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
To fix this, create a list of the indices and drop the rows in one go, as shown below:
df.drop(index=list1,labels=None, axis=0, inplace=True,columns=None, level=None, errors='raise')
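For illustration, a sketch of that pattern — the frame, the condition on 'col', and list1 below are hypothetical stand-ins, since the answer doesn't show how its list was built:

import pandas as pd

df = pd.DataFrame({'col': [1, 5, 3, 7], 'other': list('abcd')})

# Collect the index labels of every row matching the (hypothetical) condition, then drop them all at once
list1 = df.index[df['col'] > 4].tolist()
df.drop(index=list1, inplace=True, errors='raise')
print(df)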
【Comments】: