Find new and active users each week from user_id and date
Posted: 2019-09-19 21:27:25

Background
Now suppose we run a subscription-based business (we actually do). When a customer subscribes to our product, they have a number of customization options. For the purposes of this exercise, we will assume the following:
● When a user signs up, a record is generated in the Orders table for that user.
○ This will be the first record in Orders for that user_id.
○ The first date in the Orders table will be the date the user signed up.
● A user's first order ships on the same day they sign up.
● Users can change their delivery frequency at any time, and can even request extra boxes be delivered.
○ For this task we do not care about delivery frequency, mainly because the data in this example was randomly generated and the frequency cadence observed in this dataset defies natural logic ;)
● If or when a user cancels, they remain "active" for 14 days (inclusive) after their last order in the Orders table.
● A user is considered "active" on every day between their first order and their last order.
○ For this task we will not explore "reactivation", i.e., handling users who cancelled and then signed up again at some later date. For simplicity, treat these users as if they never cancelled.
Definitions
● Define a user cohort as the set of users who first became active in the same period.
● Define the retention of a given cohort over a period as the ratio N / D, where N = the number of users in the cohort who are active this period and were also active the previous period, and D = the number of users in the cohort who were active the previous period.
● Define a period as a calendar month or a calendar week starting on Sunday, as specified in the question.
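Since periods here are calendar weeks starting on Sunday, each order date has to be floored to the Sunday that begins its week. A minimal base-R sketch (the helper name `week_start` is my own, not from the question):

```r
# %w formats the weekday as 0 (Sunday) through 6 (Saturday),
# so subtracting it floors a Date to the preceding (or same) Sunday.
week_start <- function(d) d - as.integer(format(d, "%w"))

week_start(as.Date("2017-01-04"))  # Wednesday -> 2017-01-01, a Sunday
```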
Question
Produce a table with the columns:
date | count_new | count_active
count_new: how many new users signed up each week?
count_active: how many users were active each week?
Sample data:
id user_id total date payment_status
1 1 1 12783 2017-01-01 paid
2 258 1 12783 2017-01-22 paid
3 1072 1 12783 2017-02-26 paid
4 2086 1 12783 2017-03-26 paid
5 2387 1 12783 2017-04-02 paid
6 3860 1 12783 2017-04-30 paid
7 5546 1 12783 2017-05-28 paid
8 2 2 9516 2017-01-01 paid
9 68 2 9516 2017-01-08 paid
10 3 3 14536 2017-01-01 paid
11 372 3 14536 2017-01-29 paid
12 879 3 14536 2017-02-19 paid
13 1796 3 14536 2017-03-19 paid
14 3451 3 14536 2017-04-23 paid
15 4651 3 14536 2017-05-14 paid
16 5547 3 14536 2017-05-28 paid
17 6920 3 14536 2017-06-18 paid
18 7385 3 14536 2017-06-25 paid
19 10024 3 14536 2017-07-30 unpaid
20 11581 3 14536 2017-07-30 unpaid
21 13138 3 14536 2017-07-30 unpaid
22 14695 3 14536 2017-07-30 unpaid
23 4 4 5755 2017-01-01 paid
24 497 4 5755 2017-02-05 paid
25 1285 4 5755 2017-03-05 paid
26 2699 4 5755 2017-04-09 paid
27 3057 4 5755 2017-04-16 paid
28 5 5 10102 2017-01-01 paid
29 498 5 10102 2017-02-05 paid
30 1529 5 10102 2017-03-12 paid
31 2087 5 10102 2017-03-26 paid
32 2388 5 10102 2017-04-02 paid
33 6 6 13552 2017-01-01 paid
34 69 6 13552 2017-01-08 paid
structure(list(id = 1:100, user_id = c(1L, 2L, 3L, 4L, 5L, 6L,
7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L,
20L, 21L, 22L, 23L, 24L, 25L, 26L, 27L, 28L, 29L, 30L, 31L, 32L,
33L, 34L, 35L, 36L, 37L, 38L, 39L, 40L, 41L, 42L, 43L, 44L, 45L,
46L, 47L, 48L, 49L, 50L, 51L, 52L, 53L, 54L, 55L, 56L, 57L, 58L,
59L, 60L, 61L, 62L, 63L, 64L, 65L, 66L, 67L, 2L, 6L, 10L, 12L,
17L, 21L, 27L, 29L, 36L, 37L, 40L, 49L, 55L, 59L, 61L, 67L, 68L,
69L, 70L, 71L, 72L, 73L, 74L, 75L, 76L, 77L, 78L, 79L, 80L, 81L,
82L, 83L, 84L), total = c(12783L, 9516L, 14536L, 5755L, 10102L,
13552L, 6940L, 12154L, 14639L, 8034L, 10912L, 12255L, 8016L,
6483L, 9841L, 14813L, 10934L, 5194L, 7753L, 5544L, 13813L, 9739L,
13630L, 5281L, 10607L, 14873L, 13441L, 12998L, 10162L, 8110L,
8269L, 9118L, 12308L, 14144L, 5789L, 7364L, 11921L, 5276L, 11695L,
6669L, 7872L, 12890L, 7636L, 11682L, 14620L, 10876L, 12273L,
14560L, 6787L, 13150L, 5559L, 13086L, 6957L, 6862L, 12442L, 10948L,
12293L, 8398L, 8796L, 14986L, 6235L, 12077L, 5013L, 11953L, 7891L,
13551L, 14988L, 9516L, 13552L, 8034L, 12255L, 10934L, 13813L,
13441L, 10162L, 7364L, 11921L, 6669L, 6787L, 12442L, 8796L, 6235L,
14988L, 10769L, 10875L, 10603L, 12522L, 5475L, 9343L, 6860L,
11969L, 7392L, 9487L, 13016L, 6284L, 9801L, 6581L, 9164L, 11898L,
9210L), date = structure(c(17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167,
17167, 17167, 17167, 17167, 17167, 17167, 17167, 17167, 17174,
17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174,
17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174,
17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174, 17174,
17174, 17174, 17174, 17174, 17174), class = "Date"), payment_status = c("paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid", "paid", "paid", "paid", "paid", "paid",
"paid", "paid", "paid")), row.names = c(NA, 100L), class = "data.frame")
Comments:
Welcome to SO. You need to share a reproducible example. Share your data using dput.
I have added it.
Answer 1:
I managed to compute count_new by taking the first occurrence of each user_id, merging that back into the original data to add a column that flags, per date and user id, whether the user is new, and then tallying new users by date:
library(dplyr)

# First order per user, flagged as "new"
firstshow <- Orders %>%
  group_by(user_id) %>%
  arrange(date) %>%
  slice(1L) %>%
  mutate(new = "new")

# Merge the flag back onto the full data
newdata <- merge.data.frame(Orders, firstshow, by = c("date", "user_id"), all = TRUE)

# Count new users per date
count <- newdata %>%
  filter(new == "new") %>%
  group_by(date) %>%
  tally()

names(count)[2] <- "count_new"
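That covers count_new; count_active still needs the 14-day rule. Here is a sketch of one way to finish it under the rules stated in the question (a user is active from their first order through 14 days after their last order). The names `windows`, `weeks`, and `result` are my own, and a tiny stand-in for the Orders table is defined so the snippet runs on its own — substitute the real table in practice:

```r
library(dplyr)

# Stand-in for the Orders table from the question (a few rows suffice here).
Orders <- data.frame(
  user_id = c(1, 1, 2),
  date    = as.Date(c("2017-01-01", "2017-01-22", "2017-01-08"))
)

# Each user's active window: first order through last order + 14 days.
windows <- Orders %>%
  group_by(user_id) %>%
  summarise(first = min(date), last_plus = max(date) + 14)

# Every Sunday-starting week covered by the data (plus the 14-day tail).
days  <- seq(min(Orders$date), max(Orders$date) + 14, by = "day")
weeks <- sort(unique(days - as.integer(format(days, "%w"))))

# A user is active in a week if their window overlaps [Sunday, Saturday].
result <- data.frame(
  date = weeks,
  count_active = sapply(weeks, function(w) {
    sum(windows$first <= w + 6 & windows$last_plus >= w)
  })
)
```

Joining `result` with the count_new table by date would then give the requested three-column output.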