pivot_wider issue "Values in `values_from` are not uniquely identified; output will contain list-cols"

Posted: 2020-03-09 06:57:30

Question:

My data looks like this:

# A tibble: 6 x 4
  name          val time          x1
  <chr>       <dbl> <date>     <dbl>
1 C Farolillo     7 2016-04-20  51.5
2 C Farolillo     3 2016-04-21  56.3
3 C Farolillo     7 2016-04-22  56.3
4 C Farolillo    13 2016-04-23  57.9
5 C Farolillo     7 2016-04-24  58.7
6 C Farolillo     9 2016-04-25  59.0

I am trying to use the pivot_wider function to widen the data based on the name column, using the following code:

yy <- d %>% 
  pivot_wider(., names_from = name, values_from = val)

This gives me the following warning message:

Warning message:
Values in `val` are not uniquely identified; output will contain list-cols.
* Use `values_fn = list(val = list)` to suppress this warning.
* Use `values_fn = list(val = length)` to identify where the duplicates arise
* Use `values_fn = list(val = summary_fun)` to summarise duplicates

The output looks like this:

    time          x1     out1   out2
1   2016-04-20  51.50000  <dbl>  <dbl>
2   2016-04-21  56.34615  <dbl>  <dbl>
3   2016-04-22  56.30000  <dbl>  <dbl>
4   2016-04-23  57.85714  <dbl>  <dbl>
5   2016-04-24  58.70968  <dbl>  <dbl>
6   2016-04-25  58.96774  <dbl>  <dbl>

I know this issue is mentioned here, where summary statistics are suggested as a fix. However, I have time series data and therefore do not want to use summary statistics, since each day has a single value (rather than multiple values).

I understand the problem arises because the val column contains duplicates (i.e. in the example above, 7 appears 3 times).

Any suggestions on how to pivot_wider and overcome this issue?

Data:

d <- structure(list(name = c("C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", "C Farolillo", 
"C Farolillo", "C Farolillo", "C Farolillo", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", "Plaza Eliptica", 
"Plaza Eliptica", "Plaza Eliptica"), val = c(7, 3, 7, 13, 7, 
9, 20, 19, 4, 5, 5, 2, 6, 6, 16, 13, 7, 6, 3, 3, 6, 10, 5, 3, 
5, 3, 4, 4, 10, 11, 4, 13, 8, 2, 8, 10, 3, 10, 14, 4, 2, 4, 6, 
6, 8, 8, 3, 3, 13, 10, 13, 32, 25, 31, 34, 26, 33, 35, 43, 22, 
22, 21, 10, 33, 33, 48, 47, 27, 23, 11, 13, 25, 31, 20, 16, 10, 
9, 23, 11, 23, 26, 16, 34, 17, 4, 24, 21, 10, 26, 32, 10, 5, 
9, 19, 14, 27, 27, 10, 8, 28, 32, 25), time = structure(c(16911, 
16912, 16913, 16914, 16915, 16916, 16917, 16918, 16919, 16920, 
16921, 16922, 16923, 16923, 16924, 16925, 16926, 16927, 16928, 
16929, 16930, 16931, 16932, 16933, 16934, 16935, 16936, 16937, 
16938, 16939, 16940, 16941, 16942, 16943, 16944, 16945, 16946, 
16947, 16948, 16949, 16950, 16951, 16952, 16953, 16954, 16955, 
16956, 16957, 16958, 16959, 16960, 16911, 16912, 16913, 16914, 
16915, 16916, 16917, 16918, 16919, 16920, 16921, 16922, 16923, 
16923, 16924, 16925, 16926, 16927, 16928, 16929, 16930, 16931, 
16932, 16933, 16934, 16935, 16936, 16937, 16938, 16939, 16940, 
16941, 16942, 16943, 16944, 16945, 16946, 16947, 16948, 16949, 
16950, 16951, 16952, 16953, 16954, 16955, 16956, 16957, 16958, 
16959, 16960), class = "Date"), x1 = c(51.5, 56.3461538461538, 
56.3, 57.8571428571429, 58.7096774193548, 58.9677419354839, 64.4615384615385, 
61.9310344827586, 60.3214285714286, 59.4137931034483, 59.5806451612903, 
57.3448275862069, 64.0333333333333, 64.0333333333333, 70.15625, 
71.3636363636364, 62.8125, 56.4375, 56.4516129032258, 51.741935483871, 
52.84375, 53.09375, 52.969696969697, 54, 54.3870967741936, 60.3870967741936, 
64.4516129032258, 66.2903225806452, 68.2333333333333, 69.7741935483871, 
70.5806451612903, 73.8275862068966, 72.8181818181818, 64.6764705882353, 
64.4838709677419, 68.7741935483871, 62.1764705882353, 68.969696969697, 
70.1935483870968, 59.6774193548387, 59.9677419354839, 63.125, 
67.5882352941177, 71.4705882352941, 73.8529411764706, 76.1935483870968, 
72.6451612903226, 76.0645161290323, 76.4193548387097, 81.7741935483871, 
85.0645161290323, 51.5, 56.3461538461538, 56.3, 57.8571428571429, 
58.7096774193548, 58.9677419354839, 64.4615384615385, 61.9310344827586, 
60.3214285714286, 59.4137931034483, 59.5806451612903, 57.3448275862069, 
64.0333333333333, 64.0333333333333, 70.15625, 71.3636363636364, 
62.8125, 56.4375, 56.4516129032258, 51.741935483871, 52.84375, 
53.09375, 52.969696969697, 54, 54.3870967741936, 60.3870967741936, 
64.4516129032258, 66.2903225806452, 68.2333333333333, 69.7741935483871, 
70.5806451612903, 73.8275862068966, 72.8181818181818, 64.6764705882353, 
64.4838709677419, 68.7741935483871, 62.1764705882353, 68.969696969697, 
70.1935483870968, 59.6774193548387, 59.9677419354839, 63.125, 
67.5882352941177, 71.4705882352941, 73.8529411764706, 76.1935483870968, 
72.6451612903226, 76.0645161290323, 76.4193548387097, 81.7741935483871, 
85.0645161290323)), class = c("tbl_df", "tbl", "data.frame"), row.names = c(NA, 
-102L))

Question comments:

Answer 1:

Although it is not visible in the OP's example, in some cases the accepted answer duplicates rows when it does not need to. This approach avoids that in some of those cases:

d %>%
  pivot_wider(names_from = name, values_from = val) %>%
  unnest(cols = everything())

To suppress the warning, use:

`%W>%` <- function(lhs, rhs) {
  w <- options()$warn
  on.exit(options(warn = w))
  options(warn = -1)
  eval.parent(substitute(lhs %>% rhs))
}
# https://***.com/questions/47475923/custom-pipe-to-silence-warnings
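
A usage sketch (not part of the original answer), assuming dplyr or magrittr is attached so %>% is available inside the pipe:

# Note: the output still contains list-cols; only the warning is silenced.
yy <- d %W>% pivot_wider(names_from = name, values_from = val)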

However, the unnest() approach will fail if the list columns have different lengths.

Comments:

Answer 2:

A bit late to the game, but here is an option that keeps the non-unique observations while still providing a key:

table(d$name) # get the unique names_from and frequencies
# 
#    C Farolillo Plaza Eliptica 
#             51             51  

(d2 <- d %>%
   mutate(rno = rep(1:51, 2)) %>%  # repeat 1:51 twice: a unique id within each names_from group
   pivot_wider(names_from = name, values_from = val))
    # # A tibble: 51 × 5
    #    time          x1   rno `C Farolillo` `Plaza Eliptica`
    #    <date>     <dbl> <int>         <dbl>            <dbl>
    #  1 2016-04-20  51.5     1             7               32
    #  2 2016-04-21  56.3     2             3               25
    #  3 2016-04-22  56.3     3             7               31
    #  4 2016-04-23  57.9     4            13               34
    #  5 2016-04-24  58.7     5             7               26
    #  6 2016-04-25  59.0     6             9               33
    #  7 2016-04-26  64.5     7            20               35
    #  8 2016-04-27  61.9     8            19               43
    #  9 2016-04-28  60.3     9             4               22
    # 10 2016-04-29  59.4    10             5               22
    # # … with 41 more rows 

Comments:

Answer 3:

I suspect the duplication in your dataset happened unintentionally. Rows 13 and 14 are exactly the same observation. Simply correct the dataset. You can inspect your d and yy datasets to find the problematic observations.
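
A small sketch (not part of the original answer) of how one might locate the offending observations before correcting the dataset:

library(dplyr)

# List every name/time combination that occurs more than once,
# i.e. the observations that prevent a clean pivot_wider()
d %>%
  count(name, time) %>%
  filter(n > 1)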

Comments:

Answer 4:

The problem comes from the fact that the data you want to widen/pivot has duplicate identifiers. Although the other two suggestions — creating a unique artificial id from the row number with mutate(row = row_number()), or simply filtering to distinct rows — both let you pivot wider, they change the structure of your table, which may have a logic of its own, and the next time you try to join anything you may run into organisational problems.

Explicitly using the id_cols argument is better practice: it lets you check whether what you expect to be unique after pivoting wider really is, and if you run into problems, reorganise the original table first. Of course, you may well find reasons to filter to distinct rows or to add a new id; most likely, though, you want to avoid the duplication earlier in your code.
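
A minimal sketch of what that looks like here (assuming time is the column expected to uniquely identify each observation); spelling out id_cols makes the intended key explicit, so the warning immediately tells you that time alone is not unique and the duplicates should be fixed upstream:

library(tidyr)

# Only `time` is treated as the identifier; with the duplicated
# 2016-05-02 rows still present this still warns, which is exactly
# the signal that the source table needs reorganising first.
d %>%
  pivot_wider(id_cols = time, names_from = name, values_from = val)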

Comments:

I ran into a problem similar to the one above, but none of these solutions seem to work for me. I most likely have duplicate values because my data involves different ratings at different time points. I tried using id_cols, but that didn't work either.
In that case, your observations clearly have to be unique, particularly in time, so id_cols must account for every possible time observation. One way to achieve this is to _
I have tried that, but I'm not sure how to do it in long format first, before using pivot_wider. For some reason the same ID number is assigned to two observations multiple times.
So I don't want to remove the duplicates; instead, I want to change the duplicated ID numbers.
@ConDes have you tried something like df_wide %>% group_by(old_ID, time_point) %>% mutate(new_ID = paste0(old_ID, "_", 0:n()))?

Answer 5:

Usually, the warning

Warning message:
Values in `val` are not uniquely identified; output will contain list-cols.

is most commonly caused by duplicate rows in the data (ignoring the val column), rather than by duplicates within the val column itself.

which(duplicated(d))
# [1] 14 65

The OP's data appears to contain two duplicate rows, which is what caused the problem. Removing the duplicate rows also removes the warning:

yy <- d %>% distinct() %>% pivot_wider(., names_from = name, values_from = val)
yy
# A tibble: 50 x 4
   time          x1 `C Farolillo` `Plaza Eliptica`
   <date>     <dbl>         <dbl>            <dbl>
 1 2016-04-20  51.5             7               32
 2 2016-04-21  56.3             3               25
 3 2016-04-22  56.3             7               31
 4 2016-04-23  57.9            13               34
 5 2016-04-24  58.7             7               26
 6 2016-04-25  59.0             9               33
 7 2016-04-26  64.5            20               35
 8 2016-04-27  61.9            19               43
 9 2016-04-28  60.3             4               22
10 2016-04-29  59.4             5               22
# ... with 40 more rows
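
As the warning text itself hints, another option (not part of this answer) is to collapse the duplicates during the pivot with values_fn — a sketch, assuming the mean is an acceptable way to combine the repeated measurements (here the duplicates are identical, so the mean simply returns that value):

library(dplyr)
library(tidyr)

yy2 <- d %>%
  pivot_wider(names_from = name, values_from = val,
              values_fn = list(val = mean))  # aggregate duplicate time/name pairs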

Comments:

I wouldn't call the other solutions quick/dirty fixes, because in the many valid cases where multiple values per time point are allowed, that is the right approach; but since the OP says there should be only one value per time point, your solution addresses the duplicate-entry problem.
Agreed; I can see how useful it would be if there were rows that differ only in the value column.
Removing the duplicate rows from the dataset causes me to lose time-series information. The data contains two different time series, C Farolillo and Plaza Eliptica, which just happen to have the same value on the same day. These are not real duplicates, just a coincidence.
Try d[c(13,14),] and you get the following two rows: [1] 13 C Farolillo 6 2016-05-02 64.03333 [2] 14 C Farolillo 6 2016-05-02 64.03333. For C Farolillo, these are two identical observations on the same day, so to me it looks like a duplicate. Do d[c(64,65),] for the other pair.
I think this is the correct answer. Duplicate rows can be aggregated in some way before pivoting. For example, we could use group_by(name, x1) %>% summarise(x1 = sum(x1)), or mean instead of sum. At least, that is a use case I often run into in practice.

Answer 6:

Create a unique identifier row for each name and then use pivot_wider:

library(dplyr)

d %>%
  group_by(name) %>%
  mutate(row = row_number()) %>%
  tidyr::pivot_wider(names_from = name, values_from = val) %>%
  select(-row)

# A tibble: 51 x 4
#   time          x1 `C Farolillo` `Plaza Eliptica`
#   <date>     <dbl>         <dbl>            <dbl>
# 1 2016-04-20  51.5             7               32
# 2 2016-04-21  56.3             3               25
# 3 2016-04-22  56.3             7               31
# 4 2016-04-23  57.9            13               34
# 5 2016-04-24  58.7             7               26
# 6 2016-04-25  59.0             9               33
# 7 2016-04-26  64.5            20               35
# 8 2016-04-27  61.9            19               43
# 9 2016-04-28  60.3             4               22
#10 2016-04-29  59.4             5               22
# … with 41 more rows

Comments:
