Hail uses almost all CPUs

Dear Hail team,

I am using Hail 0.2 locally on my institute’s single multi-CPU workstation (48 CPUs).
I am doing a WES analysis that contains around 4,000 subjects and 3,000,000 variants.
However, while running the analysis, Hail uses more than 40 CPUs (e.g. when I run the count() command).
Is there any way to limit Hail’s CPU usage (e.g. to 30 CPUs)?

Since I am new to hail, any advice would be greatly appreciated.

If you’re running on a single machine, you can configure Spark’s local parallelism by setting the master argument in hl.init:

hl.init(master='local[30]')

The default is local[*], which uses all available cores.
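As a small aside, you can also build that master string programmatically so you never request more cores than the machine actually has. The local_master helper below is my own sketch, not part of Hail or Spark:

```python
import os

def local_master(max_cores):
    # Hypothetical helper: build a Spark "local[N]" master string,
    # capping N at the number of CPUs this machine reports.
    available = os.cpu_count() or 1
    return f"local[{min(max_cores, available)}]"

# On a 48-CPU workstation, local_master(30) returns "local[30]",
# which you could then pass as hl.init(master=local_master(30)).
```

This keeps the same hl.init call working if you move the script to a smaller machine.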

Hello Tim,

Thank you very much for your super-fast reply!
It worked well.
I appreciate it.