Hail commands failing to run on a Terra workspace Spark cluster

Hi,
I am working in a Terra workspace and using Hail for my analyses. My input files are .gprobs and .sample files (Oxford format) for 11 different chromosomes.

I first tried import_gen to convert these files to a Hail matrix table, but it ran forever and never completed. I then created PLINK-format files (.bed, .bim, and .fam), with the goal of using import_plink to create the matrix table instead. The import appears to work, but when I run hl.filter_intervals on the resulting matrix table, I get this error:

TypeError: The point type is incompatible with key type of the dataset ('dtype('locus')', 'dtype('struct{locus: locus}')')

And when I simply ask for a count, it runs forever with no output. When I look at the Jupyter log, I see this error:

YarnAllocatorNodeHealthTracker: WARN: No available nodes reported, please check Resource Manager.
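For reference, here is a minimal sketch of the calls I am making; the bucket paths, file names, reference genome, and interval string are placeholders for my actual values:

```python
import hail as hl

hl.init()  # running inside the Terra workspace Spark cluster

# Attempt 1: import the Oxford-format files directly.
# This is the step that ran forever and never completed.
mt = hl.import_gen(
    'gs://my-bucket/imputed/chr1.gprobs',          # one of 11 per-chromosome files
    sample_file='gs://my-bucket/imputed/chr1.sample',
    reference_genome='GRCh37',                     # placeholder; adjust to the actual build
)

# Attempt 2: import the PLINK files I created from the same data.
mt = hl.import_plink(
    bed='gs://my-bucket/plink/chr1.bed',
    bim='gs://my-bucket/plink/chr1.bim',
    fam='gs://my-bucket/plink/chr1.fam',
    reference_genome='GRCh37',
)

# This is the call that raises the TypeError above
# (the interval string stands in for my real region of interest).
intervals = [hl.parse_locus_interval('1:1-1000000', reference_genome='GRCh37')]
mt_filtered = hl.filter_intervals(mt, intervals)

# And this is the count that runs forever with no output.
mt.count()
```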
I have increased the cluster resources up to 16 CPUs, 120 workers, 100 preemptible workers, 26 GB of memory, and a 500 GB disk, but the errors continue.
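I initialize Hail with the defaults Terra provides and have not passed any explicit Spark settings. If that could matter, something like the sketch below is what I would try; the values shown are hypothetical examples, not settings I have verified:

```python
import hail as hl

# Hypothetical explicit Spark configuration; normally I just call hl.init()
# and let Terra's defaults apply.
hl.init(spark_conf={
    'spark.executor.memory': '20g',  # example value only
    'spark.driver.memory': '20g',    # example value only
})
```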
Please help me troubleshoot why Hail is not working; I have looked through the Hail forum but can't find a solution.
I am wondering if it is a data-size issue: I have successfully run these processing steps with Hail on this cluster on smaller files (genotype arrays), and the problems started now that I am running them on imputation files, which are much larger.
Thank you.

Oyomoare