Hello Hail team,
I’m trying to get the pruned variant table using ld_prune(), but every time I run it I get a "Container killed on request. Exit code is 137" error. I’m using a 1.1 TB matrix table (WGS of 1,300 individuals) and running on an AWS EMR cluster with EC2 instances that have 64 GB of memory and 256 GB of storage.
Am I doing something wrong? Is there an estimate of how much memory I need for this?
Thank you for the support!
Code
import hail as hl
hl.init()
# 1.1 TB WGS matrix table stored on S3
sabe_mt = hl.read_matrix_table('s3://file.mt/')
# keep only biallelic variants, then LD prune on the genotype calls
sabe_mt = sabe_mt.filter_rows(hl.len(sabe_mt.alleles) == 2)
pruned_variant_table = hl.ld_prune(sabe_mt.GT, r2=0.1, bp_window_size=50000)
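For context, once the prune succeeds my plan is to subset the matrix table to the pruned variants with a semi-join on the row key, roughly like this (just a sketch, I haven’t been able to run it yet since the prune step fails):
# keep only the variants that survived LD pruning
pruned_mt = sabe_mt.filter_rows(hl.is_defined(pruned_variant_table[sabe_mt.row_key]))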
Spark Config (Memory)
spark.driver.memory 52131M
spark.executor.memory 51316M
Hail log (tail -1000)
ldprune_error.log (154.0 KB)
Python prompt
hail_prompt.txt (10.0 KB)
Thanks for reaching out. One clarification question: how much memory per core is this?
Hello Chris,
I’m not sure how to get that info. Any tips?
Best,
Rodrigo
Hey @rodrigo.barreiro!
I think Chris is asking what kind of EC2 instances you are using.
Oh, I see. I’m only using m4.4xlarge instances (1 master, 2 core, 0–10 task nodes). They have 16 vCPUs each and 64 GB of memory.
Okay, thanks for the info. To answer my own question: 64 GB across 16 vCPUs is 4 GB per core. LD prune in Hail is a very memory-intensive operation. On Google Cloud, our users typically use high-memory machines, which have at least 6.5 GB per core.
I have two recommendations:
- Increase the spark.executor.memory property to at least 90% of the memory available on each machine. For the m4.4xlarge instances you are using, that is around 59000M (see the sketch after this list).
- You should also consider more memory-optimized instances. The instances that most closely match what our users on GCP use for LD prune are r5d.2xlarge, with 8 cores, 64 GiB of memory, and a 300 GiB SSD.
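Since you are calling hl.init() yourself, one way to set these properties is to pass them through hl.init when the Spark context is created. This is only a sketch and assumes your Hail version’s hl.init accepts the spark_conf argument; on EMR you can equivalently put the same properties in the cluster’s Spark configuration, and executor memory only takes effect if it is set before the Spark context exists.
import hail as hl
# example values for a 64 GB m4.4xlarge worker: roughly 90% of machine memory
hl.init(spark_conf={
    'spark.driver.memory': '59000m',
    'spark.executor.memory': '59000m',
})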
I hope this helps.
Thanks for the insights, @chrisvittal!
I’ll give it a try. Do the task nodes need to be memory-optimized as well?
The task nodes should be high-memory as well. You may be able to run regular r5 instances rather than r5d ones.
@chrisvittal, it worked! Thank you.
One observation: when my cluster brought up more task nodes, the increment on the task progress bar did not go up, but it did increase when I added more core nodes.
E.g. (setup: progress bar):
2 core nodes (16 vCPUs each) + 8 task nodes: [Stage X: ==> (x + 32) / 1000]
2 core nodes (16 vCPUs each) + 16 task nodes: [Stage X: ==> (x + 32) / 1000]
4 core nodes (16 vCPUs each) + 8 task nodes: [Stage X: ==> (x + 64) / 1000]
Are the task nodes doing anything in this case? Is there any recommended core/task ratio for this analysis?
Thank you again,
Rodrigo Barreiro
We recommend a 3:1 partition-to-core ratio.
I am as surprised as you are that doubling the number of workers did not double the number of active partitions. I would ask Amazon support for help.
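To make the 3:1 rule concrete, here is a rough sketch (the numbers are just an example: with 4 core nodes of 16 vCPUs each you have 64 cores, so roughly 192 partitions; note that repartitioning shuffles the data and can be expensive on a 1.1 TB table):
import hail as hl
hl.init()
mt = hl.read_matrix_table('s3://file.mt/')
# check how many partitions the table currently has
print(mt.n_partitions())
# aim for ~3 partitions per core, e.g. 64 cores -> ~192 partitions
mt = mt.repartition(192)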