How to filter retention calculations in BigQuery by certain user events from Firebase
【Posted】2019-03-11 08:36:05
【Question】I have built a query based on the one shared here: https://github.com/sagishporer/big-query-queries-for-firebase/wiki/Query:-Daily-retention to calculate user retention in BigQuery from data streamed in from Firebase.
It had been working until now, but as the dataset has grown the query can no longer run and fails with the following error:
Resources exceeded during query execution: The query could not be executed in the allotted memory. Peak usage: 129% of limit. Top memory consumer(s): sort operations used for analytic OVER() clauses: 100%
The query is as follows:
SELECT
install_date,
SUM(CASE
WHEN days_since_install = 0 THEN users
ELSE 0 END) AS day_0,
SUM(CASE
WHEN days_since_install = 1 THEN users
ELSE 0 END) AS day_1,
SUM(CASE
WHEN days_since_install = 2 THEN users
ELSE 0 END) AS day_2,
SUM(CASE
WHEN days_since_install = 3 THEN users
ELSE 0 END) AS day_3,
SUM(CASE
WHEN days_since_install = 4 THEN users
ELSE 0 END) AS day_4,
SUM(CASE
WHEN days_since_install = 5 THEN users
ELSE 0 END) AS day_5,
SUM(CASE
WHEN days_since_install = 6 THEN users
ELSE 0 END) AS day_6,
SUM(CASE
WHEN days_since_install = 7 THEN users
ELSE 0 END) AS day_7,
SUM(CASE
WHEN days_since_install = 8 THEN users
ELSE 0 END) AS day_8,
SUM(CASE
WHEN days_since_install = 9 THEN users
ELSE 0 END) AS day_9,
SUM(CASE
WHEN days_since_install = 10 THEN users
ELSE 0 END) AS day_10,
SUM(CASE
WHEN days_since_install = 11 THEN users
ELSE 0 END) AS day_11,
SUM(CASE
WHEN days_since_install = 12 THEN users
ELSE 0 END) AS day_12,
SUM(CASE
WHEN days_since_install = 13 THEN users
ELSE 0 END) AS day_13,
SUM(CASE
WHEN days_since_install = 14 THEN users
ELSE 0 END) AS day_14,
SUM(CASE
WHEN days_since_install = 15 THEN users
ELSE 0 END) AS day_15,
SUM(CASE
WHEN days_since_install = 16 THEN users
ELSE 0 END) AS day_16,
SUM(CASE
WHEN days_since_install = 17 THEN users
ELSE 0 END) AS day_17,
SUM(CASE
WHEN days_since_install = 18 THEN users
ELSE 0 END) AS day_18,
SUM(CASE
WHEN days_since_install = 19 THEN users
ELSE 0 END) AS day_19,
SUM(CASE
WHEN days_since_install = 20 THEN users
ELSE 0 END) AS day_20,
SUM(CASE
WHEN days_since_install = 21 THEN users
ELSE 0 END) AS day_21,
SUM(CASE
WHEN days_since_install = 22 THEN users
ELSE 0 END) AS day_22,
SUM(CASE
WHEN days_since_install = 23 THEN users
ELSE 0 END) AS day_23,
SUM(CASE
WHEN days_since_install = 24 THEN users
ELSE 0 END) AS day_24,
SUM(CASE
WHEN days_since_install = 25 THEN users
ELSE 0 END) AS day_25,
SUM(CASE
WHEN days_since_install = 26 THEN users
ELSE 0 END) AS day_26,
SUM(CASE
WHEN days_since_install = 27 THEN users
ELSE 0 END) AS day_27,
SUM(CASE
WHEN days_since_install = 28 THEN users
ELSE 0 END) AS day_28,
SUM(CASE
WHEN days_since_install = 29 THEN users
ELSE 0 END) AS day_29,
SUM(CASE
WHEN days_since_install = 30 THEN users
ELSE 0 END) AS day_30
FROM (
SELECT
DATE(TIMESTAMP_MICROS(user_first_touch_timestamp)) AS install_date,
DATE(TIMESTAMP_MICROS(event_timestamp)) AS event_realdate,
DATE_DIFF(DATE(TIMESTAMP_MICROS(event_timestamp)), DATE(TIMESTAMP_MICROS(user_first_touch_timestamp)), day) AS days_since_install,
COUNT(DISTINCT user_pseudo_id) AS users
FROM
`dataset.events_2019*`
WHERE
event_name = 'user_engagement'
AND user_pseudo_id NOT IN (
SELECT
user_pseudo_id
FROM (
SELECT
MIN(global_session_id),
user_pseudo_id,
user_first_touch_timestamp,
event_timestamp
FROM (
SELECT
*,
IF (previous_event='some_event'
AND LAG(global_session_id,1)OVER (ORDER BY global_session_id, event_name)=global_session_id,
LAG(global_session_id,1) OVER (ORDER BY global_session_id, event_name),
NULL) AS match
FROM (
SELECT
*,
LAG(event_name,1) OVER (ORDER BY global_session_id, event_name) AS previous_event
FROM (
SELECT
global_session_id,
event_name,
user_first_touch_timestamp,
event_timestamp,
user_pseudo_id
FROM (
SELECT
global_session_id,
event_name,
user_pseudo_id,
event_timestamp,
user_first_touch_timestamp,
IF (some_kill=1,
global_session_id,
NULL) AS session_some_kill,
IF (event_name='user_engagement',
global_session_id,
NULL) AS session
FROM (
SELECT
*,
CASE
WHEN event_params.key = 'Kills' AND event_params.value.int_value>0 THEN 1
ELSE 0
END AS some_kill,
SUM(is_new_session) OVER (ORDER BY user_pseudo_id, event_timestamp, event_name) AS global_session_id,
SUM(is_new_session) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp) AS user_session_id
FROM (
SELECT
*,
CASE
WHEN event_timestamp - last_event >= (30 * 60 * 1000) OR last_event IS NULL THEN 1
ELSE 0
END AS is_new_session
FROM (
SELECT
user_pseudo_id,
event_timestamp,
event_name,
event_params,
user_first_touch_timestamp,
LAG(event_timestamp,1) OVER (PARTITION BY user_pseudo_id ORDER BY event_timestamp) AS last_event
FROM (
SELECT
user_pseudo_id,
event_timestamp,
event_name,
event_params,
user_first_touch_timestamp
FROM `dataset.events_2019*`,
UNNEST (event_params) AS event_params)
) last
) agg
)
)
WHERE
session_some_kill IS NOT NULL
OR session IS NOT NULL
GROUP BY
global_session_id,
event_name,
user_first_touch_timestamp,
event_timestamp,
user_pseudo_id
ORDER BY
global_session_id ) ) )
WHERE
match IS NOT NULL
AND event_timestamp-user_first_touch_timestamp<1.8e+9
GROUP BY
user_pseudo_id,
user_first_touch_timestamp,
event_timestamp))
GROUP BY
install_date,
event_realdate,
days_since_install )
GROUP BY
install_date
HAVING
day_0 > 0 /* Remove older dates - not enough data, you should also ignore the first record for partial data */
ORDER BY
install_date
【Comments】:
What happens if you remove the ORDER BY from the inner queries? ORDER BY can be an expensive operation, so keep only the one in the final step. Also, UNNEST can really increase the size of each record — can you move your WHERE closer to it to reduce the record size?
【Answer 1】: Try the following:
Add a WHERE clause to the subquery with the UNNEST to cut down the size of the records it returns, which is what is hurting performance. For example:
SELECT
user_pseudo_id,
event_timestamp,
event_name,
event_params,
user_first_touch_timestamp
FROM `analytics_185672896.events_2019*`,
UNNEST (event_params) AS event_params
WHERE
event_name = 'user_engagement'
Remove the ORDER BY from the inner SQL to avoid extra computation where it isn't needed: BigQuery has to fetch all of the results and sort them before it can move on to the next step of the execution plan. For more information, see this link.
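As a minimal illustration on the question's table (a simplified sketch, not the full query from the question), an ORDER BY inside a subquery only adds a sort step that the outer aggregation never relies on:

SELECT
  DATE(TIMESTAMP_MICROS(user_first_touch_timestamp)) AS install_date,
  COUNT(DISTINCT user_pseudo_id) AS users
FROM (
  SELECT
    user_pseudo_id,
    user_first_touch_timestamp
  FROM `dataset.events_2019*`
  WHERE event_name = 'user_engagement'
  -- an ORDER BY here would force BigQuery to sort every row before the outer
  -- aggregation without changing the result; drop it from all inner steps
)
GROUP BY install_date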
【Comments】:
@maja can you let me know whether this helped solve your problem?
Thanks for the feedback @maja — could you also upvote the answer? That matters on SO too, so others can find it.
【Answer 2】: We do this kind of thing in two passes. First we work out which days each user was active on, and then we run whatever calculations we need.
We store the active days for each user with something like this:
SELECT
user_id as userId,
BIT_OR(1 << GREATEST(0, (DIV(event_timestamp, (24 * 60 * 60 * 1000000)) - DIV(user_first_touch_timestamp,(24 * 60 * 60 * 1000000)) ))) as DX,
The active days are stored as a bit field, up to 64 days per INT64, rather than computing and storing each day separately. You can add more days as needed by offsetting the shift — 64 more days per additional INT64. The query runs and exports very fast.
This is in UTC; you can convert to local time as needed.
We only need the GREATEST because we group by user_id and use account linking: when a user uninstalls, reinstalls and links, they get a new first_touch_timestamp that is later than their old events, which would otherwise make the shift negative.
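A sketch of how the two passes could fit together end to end, assuming the `dataset.events_2019*` table from the question; the `daily_activity` name, the MIN() used for the install date, and the restriction to user_engagement events are assumptions for illustration, not part of the answer above:

WITH daily_activity AS (
  -- pass 1: one row per user, one bit per day since install (day 0 = install day, UTC)
  SELECT
    user_pseudo_id AS userId,
    DATE(TIMESTAMP_MICROS(MIN(user_first_touch_timestamp))) AS install_date,
    BIT_OR(1 << GREATEST(0, (DIV(event_timestamp, (24 * 60 * 60 * 1000000))
      - DIV(user_first_touch_timestamp, (24 * 60 * 60 * 1000000))))) AS DX
    -- days beyond 63 would need a second column with the shift offset by 64, as noted above
  FROM `dataset.events_2019*`
  WHERE event_name = 'user_engagement'
  GROUP BY userId
)
-- pass 2: day-N retention is just "is bit N set", so no large sort is needed
SELECT
  install_date,
  COUNTIF(((DX >> 0) & 1) = 1) AS day_0,
  COUNTIF(((DX >> 1) & 1) = 1) AS day_1,
  COUNTIF(((DX >> 7) & 1) = 1) AS day_7,
  COUNTIF(((DX >> 30) & 1) = 1) AS day_30
FROM daily_activity
GROUP BY install_date
ORDER BY install_date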
Hope this helps,
【Comments】: