How to Use Google Cloud Platform | Hail | GWAS
Posted by Digital-LI
References:
Hail - Tutorial (Spark can also be installed on Windows: see "Setting up a Spark environment on Windows")
spark-2.2.0-bin-hadoop2.7 - the platform Hail depends on for parallel processing
Google Cloud Platform - the cloud platform
Broad's cluster set-up tool (cloudtools)
A thin wrapper around the Google Cloud SDK that makes it easier to use.
cloudtools is a small collection of command line tools intended to make using Hail on clusters running in Google Cloud's Dataproc service simpler.
These tools are written in Python and mostly function as wrappers around the gcloud suite of command line tools included in the Google Cloud SDK.
Basic use of Google Cloud
Install gcloud
Log in: see "[GCloud] Connect gcloud to Google Cloud Platform under a new Google account"
See also: "Run a Jupyter Notebook on Google Cloud Platform in just 15 minutes"
Basic operations:
Create a project
Open the console and click the three-dot icon
Create and delete virtual machines (Dataproc clusters)
gcloud dataproc clusters create name
gcloud dataproc clusters delete name
Upload and delete files (Cloud Storage)
gsutil cp local_file gs://bucket/path/
gsutil rm gs://bucket/path/local_file
Read and write files from within a program
f1 = hc.read("gs://somewhere")
So far this uses only a single VM. If you want to use many Google Cloud VMs in parallel, you need a distributed computing framework such as Spark; Hail is built on top of Spark.
Basic use of Hail
This snippet starts a cluster named "testcluster" with 1 master machine, 2 worker machines (the minimum and default), and 6 additional preemptible worker machines. Then, after the cluster has started (this can take a few minutes), a Hail script is submitted to the cluster "testcluster".
Spark basics
1. Run the wrapper locally to create Google Cloud VMs
cluster start testcluster --master-machine-type n1-highmem-8 --worker-machine-type n1-standard-8 --num-workers 8 --version devel --spark 2.2.0 --zone asia-east1-a
2. Start a notebook
cluster connect testcluster notebook
3. Submit a script from the local machine to Google Cloud
cluster submit testcluster myhailscript.py
4. SSH into the Google Cloud master node to install required software
gcloud compute ssh testcluster-m --zone asia-east1-a
5. Install scikit-learn
sudo su  # become root to install packages
/opt/conda/bin/conda install scikit-learn
Example papers
If you understand 80% of this paper, you have essentially covered the basics of genetics and statistics; it is very hands-on.
Depression is more frequently observed among individuals exposed to traumatic events. The relationship between trauma exposure and depression, including the role of genetic variation, is complex and poorly understood. The UK Biobank concurrently assessed depression and reported trauma exposure in 126,522 genotyped individuals of European ancestry. We compared the shared aetiology of depression and a range of phenotypes, contrasting individuals reporting trauma exposure with those who did not (final sample size range: 24,094-92,957). Depression was heritable in participants reporting trauma exposure and in unexposed individuals, and the genetic correlation between the groups was substantial and not significantly different from 1. Genetic correlations between depression and psychiatric traits were strong regardless of reported trauma exposure, whereas genetic correlations between depression and body mass index (and related phenotypes) were observed only in trauma exposed individuals. The narrower range of genetic correlations in trauma unexposed depression and the lack of correlation with BMI echoes earlier ideas of endogenous depression.
Major depressive disorder (MDD) is a common illness accompanied by considerable morbidity, mortality, costs, and heightened risk of suicide. We conducted a genome-wide association meta-analysis based in 135,458 cases and 344,901 controls and identified 44 independent and significant loci. The genetic findings were associated with clinical features of major depression and implicated brain regions exhibiting anatomical differences in cases. Targets of antidepressant medications and genes involved in gene splicing were enriched for smaller association signal. We found important relationships of genetic risk for major depression with educational attainment, body mass, and schizophrenia: lower educational attainment and higher body mass were putatively causal, whereas major depression and schizophrenia reflected a partly shared biological etiology. All humans carry lesser or greater numbers of genetic risk factors for major depression. These findings help refine the basis of major depression and imply that a continuous measure of risk underlies the clinical phenotype.
Some questions
What is Hail for?
Example: gnomAD
The Neale Lab at the Broad Institute used Hail to perform QC and genome-wide association analysis of 2419 phenotypes across 10 million variants and 337,000 samples from the UK Biobank in 24 hours. paper
Hail’s functionality is exposed through Python and backed by distributed algorithms built on top of Apache Spark to efficiently analyze gigabyte-scale data on a laptop or terabyte-scale data on a cluster.
- a library for analyzing structured tabular and matrix data
- a collection of primitives for operating on data in parallel
- a suite of functionality for processing genetic data
- not an acronym
conda env create -n hail -f $HAIL_HOME/python/hail/environment.yml
source activate hail
cd $HAIL_HOME/tutorials
jhail
Running a GWAS
1kg_annotations.txt
Sample   Population  SuperPopulation  isFemale  PurpleHair  CaffeineConsumption
HG00096  GBR         EUR              False     False       77.0
HG00097  GBR         EUR              True      True        67.0
HG00098  GBR         EUR              False     False       83.0
HG00099  GBR         EUR              True      False       64.0
HG00100  GBR         EUR              True      False       59.0
HG00101  GBR         EUR              False     True        77.0
The 1kg.mt directory
.
├── _SUCCESS
├── cols
│   ├── _SUCCESS
│   ├── metadata.json.gz
│   └── rows
│       ├── metadata.json.gz
│       └── parts
│           └── part-0
├── entries
│   ├── _SUCCESS
│   ├── metadata.json.gz
│   └── rows
│       ├── metadata.json.gz
│       └── parts
│           ├── part-00-2-0-0-6886f608-afb6-1e68-684b-3c5920e7edd5
│           ├── part-01-2-1-0-3d30160f-dba0-16f4-e898-4e7c30148855
│           ├── part-02-2-2-0-1051da4b-6799-6074-7d32-9bd7fa9ed9af
├── globals
│   ├── _SUCCESS
│   ├── globals
│   │   ├── metadata.json.gz
│   │   └── parts
│   │       └── part-0
│   ├── metadata.json.gz
│   └── rows
│       ├── metadata.json.gz
│       └── parts
│           └── part-0
├── metadata.json.gz
├── references
└── rows
    ├── _SUCCESS
    ├── metadata.json.gz
    └── rows
        ├── metadata.json.gz
        └── parts
            ├── part-00-2-0-0-6886f608-afb6-1e68-684b-3c5920e7edd5
            ├── part-01-2-1-0-3d30160f-dba0-16f4-e898-4e7c30148855
            ├── part-02-2-2-0-1051da4b-6799-6074-7d32-9bd7fa9ed9af
Question: "Run a Jupyter Notebook on Google Cloud Platform in just 15 minutes"
How GWAS works
To be continued~
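In essence, a GWAS fits one regression per variant: phenotype against allele count (0, 1, or 2), testing whether the slope differs from zero. A plain-Python sketch on simulated data (the normal approximation to the t-test is an assumption made for brevity):

```python
import math
import random

def assoc_test(genotypes, phenotypes):
    """Linear regression of phenotype on allele count (0/1/2) for one variant.
    Returns (beta, z, p) using a normal approximation for the p-value."""
    n = len(genotypes)
    mx = sum(genotypes) / n
    my = sum(phenotypes) / n
    sxx = sum((g - mx) ** 2 for g in genotypes)
    sxy = sum((g - mx) * (y - my) for g, y in zip(genotypes, phenotypes))
    beta = sxy / sxx                      # effect-size estimate (slope)
    resid = [y - my - beta * (g - mx) for g, y in zip(genotypes, phenotypes)]
    se = math.sqrt(sum(r * r for r in resid) / ((n - 2) * sxx))
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p, normal approximation
    return beta, z, p

# Simulated variant with a true effect of 0.5 on the phenotype
random.seed(0)
geno = [random.choice([0, 1, 2]) for _ in range(500)]
pheno = [0.5 * g + random.gauss(0, 1) for g in geno]
beta, z, p = assoc_test(geno, pheno)
```

Real pipelines additionally include covariates (sex, age, ancestry principal components) and correct for the millions of tests performed, e.g. the conventional genome-wide threshold p < 5e-8.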