Reorder row values csv pandas

Posted: 2019-05-10 03:34:09

【Question】:

I have a csv file

1 , name , 1012B-Amazon , 2044C-Flipcart , Bosh27-Walmart
2 , name , Kelvi20-Flipcart, LG-Walmart   
3,  name , Kenstar-Walmart, Sony-Amazon , Kenstar-Flipcart
4, name ,  LG18-Walmart, Bravia-Amazon

I need the rows rearranged by website, i.e. by the part after the '-':

1, name , 1012B-Amazon , 2044C-Flipcart , Bosh27-Walmart
2, name ,              , Kelv20-Flipcart, LG-Walmart
3, name , Sony-Amazon,  Kenstar-Flipcart ,Kenstar-Walmart
4, name , Bravia-Amazon,                 ,LG18-Walmart 

Is this possible with pandas? Find the occurrence of a string, move it into place, then iterate over all rows and repeat for the next string? I went through the docs for Series.str.contains and str.extract but couldn't find a solution.

【Comments】:

Do you already have a dataframe?
No, I imported csv, pandas and numpy and read it into a df.
Possible duplicate: ***.com/questions/11067027/…
@YOLO Sorry, I don't want them rearranged by column, but by the values within each row.
Is it the intended format that the second row only has 5 columns, or is the last column an empty string?

【Answer 1】:

Use sorted together with a key:

df.iloc[:,1:].apply(lambda x : sorted(x,key=lambda y: (y=='',y)),1)
     2    3    4    5
1  ABC  DEF  GHI  JKL
2  ABC  DEF  GHI     
3  ABC  DEF  GHI  JKL
#df.iloc[:,1:]=df.iloc[:,1:].apply(lambda x : sorted(x,key=lambda y: (y=='',y)),1)
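
For reference, here is a minimal self-contained sketch of the same sorted-with-key idea; the dummy frame below is an assumption that mirrors the ABC data shown above, and result_type='expand' is added so the returned lists are expanded back into columns on recent pandas versions:

import pandas as pd

# assumed dummy data: column 1 holds the names, columns 2-5 the values to reorder
df = pd.DataFrame([['name', 'ABC', 'GHI', 'DEF', 'JKL'],
                   ['name', 'GHI', 'DEF', 'ABC', ''],
                   ['name', 'JKL', 'GHI', 'ABC', 'DEF']],
                  index=[1, 2, 3], columns=[1, 2, 3, 4, 5])

# sort each row alphabetically; the (y == '', y) key pushes empty strings to the end
df.iloc[:, 1:] = df.iloc[:, 1:].apply(
    lambda x: sorted(x, key=lambda y: (y == '', y)),
    axis=1, result_type='expand').values
print(df)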

Since you mentioned reindex, I think get_dummies will also work: each value becomes an indicator column named after the value itself, and multiplying the indicators by the column names writes every value back into a fixed, alphabetically ordered position.

s=pd.get_dummies(df.iloc[:,1:],prefix ='',prefix_sep='')
s=s.drop('',1)
df.iloc[:,1:]=s.mul(s.columns).values
df
      1    2    3    4    5
1  name  ABC  DEF  GHI  JKL
2  name  ABC  DEF  GHI     
3  name  ABC  DEF  GHI  JKL
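
A hedged variant of the same indicator idea that avoids multiplying the dummies directly; the dummy frame and the use of np.where are assumptions for illustration, not the answer's exact code:

import numpy as np
import pandas as pd

# assumed dummy data; np.nan marks the missing cell
df = pd.DataFrame([['name', 'GHI', 'ABC', 'JKL', 'DEF'],
                   ['name', 'ABC', 'GHI', 'DEF', np.nan],
                   ['name', 'DEF', 'JKL', 'ABC', 'GHI']])

# one indicator column per unique value (stack drops the NaN), collapsed back to
# one row per original row; np.where writes the column name wherever the indicator
# is set, so every value lands in a fixed, alphabetically sorted slot
dummies = pd.get_dummies(df.iloc[:, 1:].stack()).groupby(level=0).max()
aligned = pd.DataFrame(np.where(dummies, dummies.columns, ''),
                       index=dummies.index, columns=dummies.columns)
print(aligned)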

【Comments】:

I get the error TypeError: ("'<' not supported between instances of 'float' and 'str'", 'occurred at index 0') — sorry, I'm new to Python.
@AnoopD You have NaN; fill them first with df=df.fillna('')
df1=df.iloc[:,1:].apply(lambda x : sorted(x,key=lambda y: (y=='',y)),1) gives df1 as 0 [ABC, DEF, ] 1 [ABC, DEF, GHI] dtype: object, and then ValueError: could not broadcast input array from shape (2,5) into shape (2,3). I used the first df.
@AnoopD It works on my side. Maybe try loading your dataframe into pandas and show us the dataframe?

【Answer 2】:

Assuming the empty values are np.nan:

# Fill in the empty values with some string to allow sorting
df.fillna('NaN', inplace=True)

# Flatten the dataframe, do the sorting and reshape back to a dataframe
pd.DataFrame(list(map(sorted, df.values)))

     0    1    2    3
0  ABC  DEF  GHI  JKL
1  ABC  DEF  GHI  NaN
2  ABC  DEF  GHI  JKL
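
Again for reference, a self-contained sketch of the flatten-and-sort step above; the dummy frame is an assumption, with np.nan marking the missing cell:

import numpy as np
import pandas as pd

# assumed dummy data
df = pd.DataFrame([['GHI', 'ABC', 'JKL', 'DEF'],
                   ['ABC', 'GHI', 'DEF', np.nan],
                   ['DEF', 'JKL', 'ABC', 'GHI']])

df.fillna('NaN', inplace=True)                    # placeholder so every cell is a string
out = pd.DataFrame(list(map(sorted, df.values)))  # sort each row independently
print(out)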

Update

Given the update to the question, with sample data such as

import numpy as np
import pandas as pd

df = pd.DataFrame({'name': ['name1', 'name2', 'name3', 'name4'],
                   'b': ['1012B-Amazon', 'Kelvi20-Flipcart', 'Kenstar-Walmart', 'LG18-Walmart'],
                   'c': ['2044C-Flipcart', 'LG-Walmart', 'Sony-Amazon', 'Bravia-Amazon'],
                   'd': ['Bosh27-Walmart', np.nan, 'Kenstar-Flipcart', np.nan]})

a possible solution is

def foo(df, retailer):

    # Find cells that contain the name of the retailer
    mask = df.where(df.apply(lambda x: x.str.contains(retailer)), '')

    # Squash the resulting mask into a series
    col = mask.max(skipna=True, axis=1)

    # Optional: trim the name of the retailer
    col = col.str.replace(f'-{retailer}', '')
    return col

df_out = pd.DataFrame(df['name'])
for retailer in ['Amazon', 'Walmart', 'Flipcart']:
    df_out[retailer] = foo(df, retailer)

which results in

    name  Amazon  Walmart Flipcart
0  name1   1012B   Bosh27    2044C
1  name2               LG  Kelvi20
2  name3    Sony  Kenstar  Kenstar
3  name4  Bravia     LG18         
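
Since the question also mentions Series.str.extract, here is a hedged regex variant of the same idea: stack the value columns, split each cell into model and retailer with a named-group pattern, and pivot into the same wide shape. The df from the update above is assumed, and the hard-coded retailer alternation in the pattern is an assumption for illustration.

# stack the value columns, pull apart "<model>-<retailer>" with a named-group regex,
# then pivot so each retailer becomes its own column (missing cells become NaN)
stacked = (df.set_index('name')
             .stack()
             .str.extract(r'(?P<model>.+)-(?P<retailer>Amazon|Flipcart|Walmart)'))
wide = stacked.reset_index().pivot(index='name', columns='retailer', values='model')
print(wide)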

【Comments】:

Sorry, this way doesn't work; the data given is just dummy data without any sortable order. What I need is to use regex to find the occurrence in each row and reorder accordingly.
If it's not alphabetical sorting, then what is the rule by which you want the data rearranged?
I think pandas Series.str.contains would work, but I'm not sure...
Can you be more specific? Find what occurrences with a regex? Reorder the rows exactly how?
1, W/M , 1012B-Amazon , 2044C-Flipcart , Bosh27-Walmart 2, R/F , Kelvi20-Flipcart, LG-Walmart 3, E/O , Kenstar-Walmart , Sony-Amazon , Kenstar-Flipcart — I need these reordered as 1, W/M , 1012B-Amazon , 2044C-Flipcart , Bosh27-Walmart 2, R/F , ,Kelvi20-Flipcart, LG-Walmart 3, E/O , Sony-Amazon , Kenstar-Flipcart,Kenstar-Walmart

【Answer 3】:

Edited after the question was updated:

Here is the abc csv:

1,name,ABC,GHI,DEF,JKL
2,name,GHI,DEF,ABC,
3,name,JKL,GHI,ABC,DEF

Here is the company csv (note the trailing commas):

1,name,1012B-Amazon,2044C-Flipcart,Bosh27-Walmart
2,name,Kelvi20-Flipcart,LG-Walmart,
3,name,Kenstar-Walmart,Sony-Amazon,Kenstar-Flipcart
4,name,LG18-Walmart,Bravia-Amazon,

Here is the code:

import pandas as pd
import numpy as np


#These solutions assume that each non-empty value is not repeated
#within a row. If that is not the case for your data, it would be possible
#to apply some transformations so that the non-empty values are unique within each row.

#"get_company" returns the company if the value is non-empty and an
#empty value if the value was empty to begin with:
def get_company(company_item):
    if pd.isnull(company_item):
        return np.nan
    else:
        company=company_item.split('-')[-1]
        return company

#Using the "define_sort_order" function, one can retrieve a template to later
#sort all rows in the sort_abc_rows function. The template is derived from all
#values, aside from empty values, within the matrix when "by_largest_row" = False.
#One could also choose the single largest row to serve as the
#template for all other rows to follow. Both options work similarly when
#all rows are subsets of the largest row i.e. Every element in every
#other row (subset) can be found in the largest row (or set)

#The difference matters when rows contain unique elements:
#whether one wants to create a table with all sorted elements serving
#as the columns, or whether one wants to simply exclude elements
#that are not in the largest row whenever at least one row is not a subset of it.

#Rather than only returning the original data rows, one can also obtain a
#template whose values differ from those of the original dataset by passing
#a function that transforms the values before the template is built.

def define_sort_order(data,by_largest_row = False,value_filtering_function = None):
    if not by_largest_row: 
        if value_filtering_function:
            data = data.applymap(value_filtering_function)
        #data.values returns a numpy array                 
        #with rows and columns. .flatten()
        #puts all elements in a 1 dim array
        #set gets all unique values in the array
        filtered_values = list(set((data.values.flatten())))
        filtered_values = [data_value for data_value in filtered_values if not_empty(data_value)]
        #sorted returns a list, even with np.arrays as inputs

        model_row = sorted(filtered_values)
    else:
        if value_filtering_function:
            data = data.applymap(value_filtering_function)
        row_lengths = data.apply(lambda data_row: data_row.notnull().sum(),axis = 1)
        #locates the numerical index for the row with the most non-empty elements:
        model_row_idx = row_lengths.idxmax()
    #sort and filter the row with the most values:
        filtered_values = list(set(data.iloc[model_row_idx]))

        model_row = [data_value for data_value in sorted(filtered_values) if not_empty(data_value)] 

    return model_row

#"not_empty" is used in the above function in order to filter list models that
#they no empty elements remain
def not_empty(value):
    return pd.notnull(value) and value not in ['','  ',None]

#Sorts all elements of each data_row into their corresponding positions in the model row.
#Positions of the model row that are missing from the current data_row are filled with np.nan

def reorder_data_rows(data_row,model_row,check_by_function=None):
    #Here we apply the same function that was used when computing the ordering of the
    #model_row. Each value of the data row is transformed temporarily to check whether
    #the transformed value is in the model row; if it is, we find its position there
    #and place the original value at that position.
    if check_by_function: 
        sorted_data_row = [np.nan]*len(model_row) #creating an empty vector that is the
                          #same length as the template, or model_row

        data_row = [value for value in data_row.values if not_empty(value)]

        for value in data_row:
            value_lookup = check_by_function(value)
            if value_lookup in model_row:
                idx = model_row.index(value_lookup)
                #placing items in their respective positions as indicated by the model_row
                sorted_data_row[idx] = value    
    else:
        sorted_data_row = [value if value in data_row.values else np.nan for value in model_row]
    return pd.Series(sorted_data_row)

##################### ABC ######################
#Reading the data:
#pandas would treat the first row of the file as the header if the
#header = None option were not included. Note: "name" and the 1,2,3 column are not in the index.
abc = pd.read_csv("abc.csv",header = None,index_col = None)
# Returns a sorted, non-empty list. If you hard code the order you want,
# you can simply pass the hard coded order as model_row and avoid
# all functions aside from reorder_data_rows.
model_row = define_sort_order(abc.iloc[:,2:],False)

#applying the "define_sort_order" function we created earlier to each row before saving back into
#the original dataframe
#lambda allows us to create our own function without giving it a name.
#it is useful in this circumstance in order to use two inputs for sort_abc_rows


abc.iloc[:,2:] = abc.iloc[:,2:].apply(lambda abc_row: reorder_data_rows(abc_row,model_row),axis = 1).values

#Saving to a new csv that won't include the pandas created indices (0,1,2)
#or columns names (0,1,2,3,4):

abc.to_csv("sorted_abc.csv",header = False,index = False)
################################################


################## COMPANY #####################
company = pd.read_csv("company.csv",header=None,index_col=None)

model_row = define_sort_order(company.iloc[:,2:],by_largest_row = False,value_filtering_function=get_company)
#the only thing that changes here is that we tell the sort function what specific
#criteria to use to reorder each row by. We're using the result from the
#get_company function to do so. The custom function get_company, takes an input
#such as Kenstar-Walmart, and outputs Walmart (what's after the "-").
#we would then sort by the resulting list of companies. 

#Because we used the define_sort_order function to retrieve companies rather than company items in order,
#We need to use the same function to reorder each element in the DataFrame
company.iloc[:,2:] = company.iloc[:,2:].apply(lambda companies_row: reorder_data_rows(companies_row,model_row,check_by_function=get_company),axis=1).values
company.to_csv("sorted_company.csv",header = False,index = False)
#################################################

Here is the first result, from sorted_abc.csv:

1  name  ABC  DEF  GHI  JKL
2  name  ABC  DEF  GHI  NaN
3  name  ABC  DEF  GHI  JKL

After adapting the code for the company table, this is the sorted_company.csv produced by running the script:

1  name    1012B-Amazon    2044C-Flipcart   Bosh27-Walmart
2  name             NaN  Kelvi20-Flipcart       LG-Walmart
3  name     Sony-Amazon  Kenstar-Flipcart  Kenstar-Walmart
4  name   Bravia-Amazon               NaN     LG18-Walmart
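
As the comment in the script notes, when the desired column order is already known, define_sort_order can be skipped and the template hard-coded. A small sketch, reusing get_company and reorder_data_rows from above; the Amazon/Flipcart/Walmart order is an assumed choice, not something fixed by the answer:

# hard-coded template instead of deriving it with define_sort_order;
# the retailer order here is an assumption
model_row = ['Amazon', 'Flipcart', 'Walmart']
company.iloc[:, 2:] = company.iloc[:, 2:].apply(
    lambda companies_row: reorder_data_rows(companies_row, model_row,
                                            check_by_function=get_company),
    axis=1).values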

Hope this helps!

【Comments】:

Thanks for your solution; it works for the exact case, but if you change the second row to 2,name,DEF,GHI,JKL it no longer works.
Modified it now. Hope this works for you.
