dplyr: inner_join with a partial string match

Posted 2022-01-17 18:41:09

【Question】:

I'd like to join two data frames whenever the seed column in data frame y partially matches the string column in data frame x. This example should illustrate:

# What I have
library(dplyr)  # data_frame() comes from dplyr/tibble
x <- data.frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data_frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))


x

  idX         string
1   1     Motorcycle
2   2 TractorTrailer
3   3       Sailboat

y

Source: local data frame [3 x 2]

    idY   seed
  (chr)  (chr)
1     a ractor
2     b otorcy
3     c irplan


# What I want
want <- data.frame(idX=c(1,2), idY=c("b", "a"), string=c("Motorcycle", "TractorTrailer"), seed=c("otorcy", "ractor"))

want

  idX idY         string   seed
1   1   b     Motorcycle otorcy
2   2   a TractorTrailer ractor

That is, something like

inner_join(x, y, by=stringr::str_detect(x$string, y$seed))

【Question comments】:

I'm actually trying to match longer nucleotide sequences in one data frame against miRNA seed sequences in another data frame. Maybe the Bioconductor Biostrings package would be more efficient, but I'm not sure how to join across different data frames with it.

What is the actual size of the problem? How many seeds/strings, and how long is each?

Hi @MartinMorgan. In a test case with roughly 10,000 "strings" (PAR-CLIP cluster sequences) in data frame X and up to about 100 "seeds" (miRNA reverse-complement seed sequences) in data frame Y, the solution I used in my answer below took a few minutes. Slow, but tolerable. The real size could be up to 30,000 strings and 1,000 seeds (a 30,000,000-row full join!). I looked at Biostrings but couldn't get it to play nicely with dplyr tbl/data.frames, and dplyr doesn't handle DataFrame objects well either.

【Answer 1】:

The fuzzyjoin package has two functions, regex_inner_join and fuzzy_inner_join, that let you match on partial strings:

x <- data.frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data.frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))
x$string = as.character(x$string)
y$seed = as.character(y$seed)


library(fuzzyjoin)
x %>% regex_inner_join(y, by = c(string = "seed"))

  idX         string idY   seed
1   1     Motorcycle   b otorcy
2   2 TractorTrailer   a ractor


library(stringr)
x %>% fuzzy_inner_join(y, by = c("string" = "seed"), match_fun = str_detect)


  idX         string idY   seed
1   1     Motorcycle   b otorcy
2   2 TractorTrailer   a ractor
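As a commenter notes below, str_detect() expects its arguments as (string, pattern), which is the order fuzzy_inner_join() passes the columns in, and on large tables a fixed-string matcher avoids regex overhead. A minimal sketch of that variant (not part of the original answer), using stri_detect_fixed from the stringi package:

library(stringi)
# same join as above, but with literal substring matching instead of a regex
x %>% fuzzy_inner_join(y, by = c("string" = "seed"), match_fun = stri_detect_fixed)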

【Comments】:

For better performance on large tables you can use match_fun = stri_detect_fixed from the stringi package.

Note that str_detect expects string, pattern rather than pattern, string.

【Answer 2】:

You can also do this in base R with the following function (slightly adapted from this answer: https://***.com/a/34723496/3048453; it uses dplyr to bind the columns together, use cbind if you don't want dplyr):

partial_join <- function(x, y, by_x, pattern_y) {
 # for each pattern in y, find the row indices of x it matches
 idx_x <- sapply(y[[pattern_y]], grep, x[[by_x]])
 # repeat each y row index once per matching x row
 idx_y <- sapply(seq_along(idx_x), function(i) rep(i, length(idx_x[[i]])))

 df <- dplyr::bind_cols(x[unlist(idx_x), , drop = F],
                        y[unlist(idx_y), , drop = F])
 return(df)
}
With your example:

library(dplyr)  # for data_frame()
x <- data.frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data_frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))

df_merged <- partial_join(x, y, by_x = "string", pattern_y = "seed")
df_merged
# # A tibble: 2 × 4
#     idX         string   idY   seed
#   <int>          <chr> <chr>  <chr>
# 1     1     Motorcycle     b otorcy
# 2     2 TractorTrailer     a ractor
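For reference (a note added here, not part of the original answer), the intermediate index that partial_join builds maps each seed to the row(s) of x it matches; for the example data it should look roughly like this:

sapply(y$seed, grep, x$string)
#> $ractor
#> [1] 2
#>
#> $otorcy
#> [1] 1
#>
#> $irplan
#> integer(0)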

Speed benchmark:

Functions

library(dplyr)
x <- data_frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data_frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))

partial_join <- function(x, y, by_x, pattern_y) {
 idx_x <- sapply(y[[pattern_y]], grep, x[[by_x]])
 idx_y <- sapply(seq_along(idx_x), function(i) rep(i, length(idx_x[[i]])))

 df <- dplyr::bind_cols(x[unlist(idx_x), , drop = F],
                        y[unlist(idx_y), , drop = F])
 return(df)
}


partial_join(x, y, by_x = "string", pattern_y = "seed")
#> # A tibble: 2 × 4
#>     idX         string   idY   seed
#>   <int>          <chr> <chr>  <chr>
#> 1     1     Motorcycle     b otorcy
#> 2     2 TractorTrailer     a ractor

joran <- function(x, y, by_x, pattern_y) {
 library(dplyr)
 my_db <- src_sqlite(path = tempfile(), create = TRUE)
 x_tbl <- copy_to(dest = my_db, df = x)
 y_tbl <- copy_to(dest = my_db, df = y)

 result <- tbl(my_db, 
               sql(sprintf("select * from x, y where x.%s like '%%' || y.%s || '%%'", by_x, pattern_y)))
 collect(result, n = Inf)
}

joran(x, y, "string", "seed")
#> # A tibble: 2 × 4
#>     idX         string   idY   seed
#>   <int>          <chr> <chr>  <chr>
#> 1     1     Motorcycle     b otorcy
#> 2     2 TractorTrailer     a ractor

stephen <- function(x, y, by_x, pattern_y) {
 library(dplyr)
 d <- full_join(mutate(x, i=1), 
                mutate(y, i=1), by = "i")
 # quoting issue here, defaulting to base-r
 d$take <- stringr::str_detect(d[[by_x]], d[[pattern_y]])
 d %>% 
  filter(take == T) %>% 
  select(-i, -take)
}

stephen(x, y, "string", "seed")
#> # A tibble: 2 × 4
#>     idX         string   idY   seed
#>   <int>          <chr> <chr>  <chr>
#> 1     1     Motorcycle     b otorcy
#> 2     2 TractorTrailer     a ractor


feng <- function(x, y, by_x, pattern_y) {
 library(fuzzyjoin)

 by_string <- pattern_y
 names(by_string) <- by_x
 regex_inner_join(x, y, by = by_string)
}

feng(x, y, "string", "seed")
#> # A tibble: 2 × 4
#>     idX         string   idY   seed
#>   <int>          <chr> <chr>  <chr>
#> 1     1     Motorcycle     b otorcy
#> 2     2 TractorTrailer     a ractor

Benchmarks

library(microbenchmark)
res <- microbenchmark(
 joran(x, y, "string", "seed"),
 stephen(x, y, "string", "seed"),
 feng(x, y, "string", "seed"),
 partial_join(x, y, "string", "seed")
)
res
#> Unit: microseconds
#>                                  expr       min         lq       mean
#>         joran(x, y, "string", "seed") 18953.008 20099.0540 21641.6646
#>       stephen(x, y, "string", "seed")  1320.161  1456.9415  1704.9218
#>          feng(x, y, "string", "seed")  5187.366  5625.8825  6926.2336
#>  partial_join(x, y, "string", "seed")   190.264   222.0055   257.7906
#>      median        uq        max neval cld
#>  20675.5855 21827.764  70707.324   100   c
#>   1579.8925  1670.719   9676.176   100 a  
#>   5842.8150  6065.530 107961.805   100  b 
#>    242.0735   283.870    523.649   100 a

set.seed(123123)
x_large <- x %>% sample_n(1000, replace = T)
y_large <- y %>% sample_n(1000, replace = T)


res_large <- microbenchmark(
 joran(x_large, y_large, "string", "seed"),
 # stephen(x_large, y_large, "string", "seed"),
 feng(x_large, y_large, "string", "seed"),
 partial_join(x_large, y_large, "string", "seed")
)
res_large
#> Unit: milliseconds
#>                                              expr       min        lq     mean    median        uq      max neval cld
#>         joran(x_large, y_large, "string", "seed") 321.03631 324.49262 334.2760 329.13991 335.30185 368.1153    10   c
#>          feng(x_large, y_large, "string", "seed")  88.00369  89.85744 103.8686  93.84477  97.69121 200.0473    10 a  
#>  partial_join(x_large, y_large, "string", "seed") 286.01533 286.78024 290.6295 288.89405 291.79887 303.4524    10  b 

【Comments】:

The second benchmark had an error: it used the original (small) x and y when benchmarking res_large, which is why the timings were the same as res. When I replaced them with x_large and y_large, it showed Feng's solution (fuzzyjoin) to be about 5x faster. I suspect that's because fuzzyjoin is more efficient (especially when there are few unique values) but has more overhead on small datasets.

@DavidRobinson, thanks for pointing that out! I've corrected the numbers and the post.

【Answer 3】:

I don't know how this will hold up with larger data, but it (or some variant of it) may be worth a try:

library(dplyr)

x <- data.frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data_frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))

my_db <- src_sqlite(path = tempfile(), create = TRUE)
x_tbl <- copy_to(dest = my_db, df = x)
y_tbl <- copy_to(dest = my_db, df = y)

result <- tbl(my_db, sql("select * from x, y where x.string like '%' || y.seed || '%'"))
collect(result)

Source: local data frame [2 x 4]

    idX         string   idY   seed
  (int)          (chr) (chr)  (chr)
1     1     Motorcycle     b otorcy
2     2 TractorTrailer     a ractor

I also can't say how its performance would differ across databases. Postgres or MySQL might be better or worse at this sort of query.
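A side note, not from the original answer: src_sqlite() has since been deprecated in dplyr, so a rough present-day equivalent would use DBI/RSQLite directly. A minimal sketch, assuming the DBI and RSQLite packages are installed:

library(DBI)
con <- dbConnect(RSQLite::SQLite(), ":memory:")   # in-memory SQLite database
dbWriteTable(con, "x", x)
dbWriteTable(con, "y", y)
dbGetQuery(con, "select * from x, y where x.string like '%' || y.seed || '%'")
dbDisconnect(con)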

【Comments】:

【Answer 4】:

This works, but it will be very slow on large datasets.

library(dplyr)
library(stringr)  # for str_detect()

x <- data.frame(idX=1:3, string=c("Motorcycle", "TractorTrailer", "Sailboat"))
y <- data_frame(idY=letters[1:3], seed=c("ractor", "otorcy", "irplan"))

full_join(mutate(x, i=1), 
          mutate(y, i=1)) %>% 
  select(-i) %>% 
  filter(str_detect(string, seed))
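To make the scaling concern concrete (a note added here, not part of the original answer): the intermediate full join has nrow(x) * nrow(y) rows before filtering, which is what makes this approach expensive on large inputs, e.g. 30,000 strings and 1,000 seeds would mean a 30-million-row intermediate table:

nrow(full_join(mutate(x, i = 1), mutate(y, i = 1), by = "i"))
#> [1] 9   # 3 strings x 3 seeds in the toy example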

【Comments】:
