Heap out of memory

Hi there,

I’m using the hl.agg.linreg() aggregator to run a weighted GWAS. Everything went well when I only included 10 covariates, but when I raised the number of covariates to 20, the program crashed with an OutOfMemoryError. Here’s the error log:

Java stack trace:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at is.hail.io.StreamBlockOutputBuffer.writeBlock(OutputBuffers.scala:293)
at is.hail.io.LZ4OutputBlockBuffer.writeBlock(OutputBuffers.scala:313)
at is.hail.io.BlockingOutputBuffer.writeBlock(OutputBuffers.scala:190)
at is.hail.io.BlockingOutputBuffer.writeDouble(OutputBuffers.scala:234)
at is.hail.io.LEB128OutputBuffer.writeDouble(OutputBuffers.scala:170)
at __C402etypeEncode.__m408ENCODE_r_float64_TO_r_float64(Unknown Source)
at __C402etypeEncode.__m407write_fields_group_0(Unknown Source)
at __C402etypeEncode.__m406ENCODE_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDEND_TO_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDEND(Unknown Source)
at __C402etypeEncode.__m405ENCODE_r_array_of_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDEND_TO_r_array_of_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDEND(Unknown Source)
at __C402etypeEncode.__m404write_fields_group_0(Unknown Source)
at __C402etypeEncode.__m403ENCODE_r_tuple_of_r_array_of_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDENDEND_TO_r_struct_of_r_array_of_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDENDEND(Unknown Source)
at __C402etypeEncode.apply(Unknown Source)
at is.hail.io.CompiledEncoder.writeRegionValue(Encoder.scala:32)
at is.hail.annotations.BroadcastRegionValue$class.broadcast(BroadcastValue.scala:55)
at is.hail.annotations.BroadcastRow.broadcast$lzycompute(BroadcastValue.scala:75)
at is.hail.annotations.BroadcastRow.broadcast(BroadcastValue.scala:75)
at is.hail.expr.ir.TableMapRows.execute(TableIR.scala:1450)
at is.hail.expr.ir.TableMapGlobals.execute(TableIR.scala:1737)
at is.hail.expr.ir.Interpret$.run(Interpret.scala:811)
at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:56)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
Hail version: 0.2.49-11ae8408bad0
Error summary: OutOfMemoryError: Java heap space

May I ask if there’s anything I could do in this case? Thank you very much!

what’s the pipeline and what’s the full stack trace?

Pipeline:

#!/usr/bin/python
import hail as hl
hl.init()
mt = hl.import_plink(bed='chr22_EA_40_40.bed', bim='chr22_EA_40_40.bim', fam='chr22_EA_40_40.fam', quant_pheno=True)
covar = (hl.import_table('EA_covar.txt', types={'IID': hl.tstr}, impute=True).key_by('IID'))
mt = mt.annotate_cols(covar=covar[mt.s])
weight = (hl.import_table('EA_weight.txt', types={'IID': hl.tstr}, impute=True).key_by('IID'))
mt = mt.annotate_cols(weight=weight[mt.s])
mt = mt.annotate_rows(gwas=hl.agg.linreg(mt.quant_pheno,
    [1, mt.covar.isMale, mt.covar.YEAR, mt.covar.isAxiom,
     mt.covar.PC1, mt.covar.PC2, mt.covar.PC3, mt.covar.PC4,
     mt.covar.PC5, mt.covar.PC6, mt.covar.PC7, mt.covar.PC8,
     mt.covar.PC9, mt.covar.PC10, mt.covar.PC11, mt.covar.PC12,
     mt.covar.PC13, mt.covar.PC14, mt.covar.PC15, mt.covar.PC16,
     mt.covar.PC17, mt.covar.PC18, mt.covar.PC19, mt.covar.PC20,
     mt.GT.n_alt_alleles()], weight=mt.weight.weight))
mt.gwas.export('chr22_EA_40_40_hail.txt')

Full error log (including stack trace):

2020-07-16 15:45:59 Hail: INFO: Found 372999 samples in fam file.
2020-07-16 15:45:59 Hail: INFO: Found 2290 variants in bim file.
2020-07-16 15:45:59 Hail: INFO: Reading table to impute column types
[Stage 0:> (0 + 1) / 1]2020-07-16 15:46:05 Hail: INFO: Finished type imputation
Loading column 'IID' as type 'str' (user-specified)
Loading column 'isMale' as type 'bool' (imputed)
Loading column 'YEAR' as type 'int32' (imputed)
Loading column 'isAxiom' as type 'bool' (imputed)
Loading column 'PC1' as type 'float64' (imputed)
Loading column 'PC2' as type 'float64' (imputed)
Loading column 'PC3' as type 'float64' (imputed)
Loading column 'PC4' as type 'float64' (imputed)
Loading column 'PC5' as type 'float64' (imputed)
Loading column 'PC6' as type 'float64' (imputed)
Loading column 'PC7' as type 'float64' (imputed)
Loading column 'PC8' as type 'float64' (imputed)
Loading column 'PC9' as type 'float64' (imputed)
Loading column 'PC10' as type 'float64' (imputed)
Loading column 'PC11' as type 'float64' (imputed)
Loading column 'PC12' as type 'float64' (imputed)
Loading column 'PC13' as type 'float64' (imputed)
Loading column 'PC14' as type 'float64' (imputed)
Loading column 'PC15' as type 'float64' (imputed)
Loading column 'PC16' as type 'float64' (imputed)
Loading column 'PC17' as type 'float64' (imputed)
Loading column 'PC18' as type 'float64' (imputed)
Loading column 'PC19' as type 'float64' (imputed)
Loading column 'PC20' as type 'float64' (imputed)
2020-07-16 15:46:05 Hail: INFO: Reading table to impute column types
2020-07-16 15:46:05 Hail: INFO: Finished type imputation
Loading column 'IID' as type 'str' (user-specified)
Loading column 'weight' as type 'float64' (imputed)
[Stage 3:> (0 + 1) / 1]Traceback (most recent call last):
File "test.py", line 17, in <module>
mt.gwas.export('chr22_EA_40_40_hail.txt')
File "", line 2, in export
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/typecheck/check.py", line 614, in wrapper
return original_func(*args, **kwargs)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/expr/expressions/base_expression.py", line 944, in export
ds.export(output=path, delimiter=delimiter, header=header)
File "", line 2, in export
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/typecheck/check.py", line 614, in wrapper
return original_func(*args, **kwargs)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/table.py", line 1038, in export
ir.TableWrite(self._tir, ir.TableTextWriter(output, types_file, header, parallel, delimiter)))
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/backend/spark_backend.py", line 296, in execute
result = json.loads(self._jhc.backend().executeJSON(jir))
File "/ua/xzhong35/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/backend/spark_backend.py", line 41, in deco
'Error summary: %s' % (deepest, full, hail.__version__, deepest)) from None
hail.utils.java.FatalError: OutOfMemoryError: Java heap space

Java stack trace:
java.lang.OutOfMemoryError: Java heap space
at java.util.Arrays.copyOf(Arrays.java:3236)
at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
at is.hail.io.StreamBlockOutputBuffer.writeBlock(OutputBuffers.scala:293)
at is.hail.io.LZ4OutputBlockBuffer.writeBlock(OutputBuffers.scala:313)
at is.hail.io.BlockingOutputBuffer.writeBlock(OutputBuffers.scala:190)
at is.hail.io.BlockingOutputBuffer.writeDouble(OutputBuffers.scala:234)
at is.hail.io.LEB128OutputBuffer.writeDouble(OutputBuffers.scala:170)
at __C402etypeEncode.__m408ENCODE_r_float64_TO_r_float64(Unknown Source)
at __C402etypeEncode.__m407write_fields_group_0(Unknown Source)
at __C402etypeEncode.__m406ENCODE_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDEND_TO_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDEND(Unknown Source)
at __C402etypeEncode.__m405ENCODE_r_array_of_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDEND_TO_r_array_of_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDEND(Unknown Source)
at __C402etypeEncode.__m404write_fields_group_0(Unknown Source)
at __C402etypeEncode.__m403ENCODE_r_tuple_of_r_array_of_r_tuple_of_r_float64ANDo_tuple_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_tuple_of_r_float64ENDENDEND_TO_r_struct_of_r_array_of_r_struct_of_r_float64ANDo_struct_of_r_boolANDr_int32ANDr_boolANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ANDr_float64ENDANDo_struct_of_r_float64ENDENDEND(Unknown Source)
at __C402etypeEncode.apply(Unknown Source)
at is.hail.io.CompiledEncoder.writeRegionValue(Encoder.scala:32)
at is.hail.annotations.BroadcastRegionValue$class.broadcast(BroadcastValue.scala:55)
at is.hail.annotations.BroadcastRow.broadcast$lzycompute(BroadcastValue.scala:75)
at is.hail.annotations.BroadcastRow.broadcast(BroadcastValue.scala:75)
at is.hail.expr.ir.TableMapRows.execute(TableIR.scala:1450)
at is.hail.expr.ir.TableMapGlobals.execute(TableIR.scala:1737)
at is.hail.expr.ir.Interpret$.run(Interpret.scala:811)
at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:56)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
Hail version: 0.2.49-11ae8408bad0
Error summary: OutOfMemoryError: Java heap space

Please let me know if you need any further information, thanks!

A few questions -

  1. what runtime are you using? A Spark cluster, or running locally? Spark allocates a very small amount of memory by default in local mode.

  2. What are the dimensions of the PLINK file you’re importing? Number of variants, number of samples?
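If it turns out the driver is running with Spark's default heap, one common workaround is to raise the driver/executor memory before Hail starts. A minimal sketch, assuming a pip-installed Hail on Spark; the 8g values are placeholders to tune for your machine, not recommendations:

```python
import os

# Must be set before hl.init() creates the SparkContext, or it has no effect.
# "8g" is an example figure; size it to the host's available RAM.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 8g --executor-memory 8g pyspark-shell"

# import hail as hl   # then initialize as usual
# hl.init()
```

On a managed cluster the same settings are usually passed to spark-submit (`--driver-memory`, `--executor-memory`) instead of through the environment.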

  1. Yes, I’m running on a Spark cluster.

  2. There are 372999 samples, and the total number of variants is 7000000. However, I split the chromosomes into pieces before running the GWAS, and the largest piece contains ~15000 variants.

Can we see the full pipeline? May need to look at the hail log file too if that doesn’t illuminate things.

Do you mean the pipeline I used to cut the chromosomes into pieces? I did that with R and PLINK, not Hail.

oops, sorry, you posted it above.

I think the problem may be related to the scalability of Hail’s import_plink function. Could you try running PLINK to create a VCF from those input files, and importing that?
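A hypothetical sketch of that conversion, assuming PLINK 1.9 on your PATH and the same file prefix as your pipeline (the run() call is commented out so you can inspect the command first):

```python
# Build the PLINK 1.9 command that converts the bed/bim/fam trio
# to a bgzipped VCF; uncomment the run() call to actually execute it.
import subprocess

prefix = "chr22_EA_40_40"  # same prefix as the import_plink call above
cmd = ["plink", "--bfile", prefix, "--recode", "vcf-iid", "bgz", "--out", prefix]
# subprocess.run(cmd, check=True)

# Then, in Hail:
# mt = hl.import_vcf(prefix + ".vcf.gz", force_bgz=True)
```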

I tried importing VCF files instead and running the GWAS, but a similar error occurred:

2020-07-17 15:21:17 Hail: INFO: Reading table to impute column types
[Stage 0:> (0 + 1) / 1]2020-07-17 15:21:26 Hail: INFO: Finished type imputation
Loading column 'IID' as type 'str' (user-specified)
Loading column 'isMale' as type 'bool' (imputed)
Loading column 'YEAR' as type 'int32' (imputed)
Loading column 'isAxiom' as type 'bool' (imputed)
Loading column 'PC1' as type 'float64' (imputed)
Loading column 'PC2' as type 'float64' (imputed)
Loading column 'PC3' as type 'float64' (imputed)
Loading column 'PC4' as type 'float64' (imputed)
Loading column 'PC5' as type 'float64' (imputed)
Loading column 'PC6' as type 'float64' (imputed)
Loading column 'PC7' as type 'float64' (imputed)
Loading column 'PC8' as type 'float64' (imputed)
Loading column 'PC9' as type 'float64' (imputed)
Loading column 'PC10' as type 'float64' (imputed)
Loading column 'PC11' as type 'float64' (imputed)
Loading column 'PC12' as type 'float64' (imputed)
Loading column 'PC13' as type 'float64' (imputed)
Loading column 'PC14' as type 'float64' (imputed)
Loading column 'PC15' as type 'float64' (imputed)
Loading column 'PC16' as type 'float64' (imputed)
Loading column 'PC17' as type 'float64' (imputed)
Loading column 'PC18' as type 'float64' (imputed)
Loading column 'PC19' as type 'float64' (imputed)
Loading column 'PC20' as type 'float64' (imputed)
2020-07-17 15:21:26 Hail: INFO: Reading table to impute column types
[Stage 1:> (0 + 1) / 1]2020-07-17 15:21:27 Hail: INFO: Finished type imputation
Loading column 'IID' as type 'str' (user-specified)
Loading column 'weight' as type 'float64' (imputed)
2020-07-17 15:21:27 Hail: INFO: Reading table to impute column types
2020-07-17 15:21:27 Hail: INFO: Finished type imputation
Loading column 'IID' as type 'str' (user-specified)
Loading column 'EA' as type 'int32' (imputed)
[Stage 6:========================> (11 + 14) / 25]2020-07-17 15:23:27 Hail: INFO: Coerced sorted dataset
[Stage 7:> (0 + 24) / 25]Traceback (most recent call last):
File "test2.py", line 19, in <module>
mt.gwas.export('chr22_1_40_hail.txt')
File "<decorator-gen-577>", line 2, in export
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/typecheck/check.py", line 614, in wrapper
return original_func(*args, **kwargs)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/expr/expressions/base_expression.py", line 944, in export
ds.export(output=path, delimiter=delimiter, header=header)
File "", line 2, in export
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/typecheck/check.py", line 614, in wrapper
return original_func(*args, **kwargs)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/table.py", line 1038, in export
ir.TableWrite(self._tir, ir.TableTextWriter(output, types_file, header, parallel, delimiter)))
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/backend/spark_backend.py", line 296, in execute
result = json.loads(self._jhc.backend().executeJSON(jir))
File "/ua/xzhong35/.local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1257, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/ua/xzhong35/.local/lib/python3.6/site-packages/hail/backend/spark_backend.py", line 41, in deco
'Error summary: %s' % (deepest, full, hail.__version__, deepest)) from None
hail.utils.java.FatalError: OutOfMemoryError: Java heap space

Java stack trace:
org.apache.spark.SparkException: Job aborted.
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:100)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
at is.hail.utils.richUtils.RichRDD$.writeTable$extension(RichRDD.scala:78)
at is.hail.expr.ir.TableValue.export(TableValue.scala:98)
at is.hail.expr.ir.TableTextWriter.apply(TableWriter.scala:337)
at is.hail.expr.ir.Interpret$.run(Interpret.scala:811)
at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:56)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:13)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$class.apply(LoweringPass.scala:13)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.apply(LoweringPass.scala:51)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:14)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:12)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:12)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:318)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:305)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:20)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:18)
at is.hail.utils.package$.using(package.scala:602)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:18)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:230)
at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:304)
at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:324)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

org.apache.spark.SparkException: Job aborted due to stage failure: Task 12 in stage 7.0 failed 1 times, most recent failure: Lost task 12.0 in stage 7.0 (TID 43, localhost, executor driver): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.OutOfMemoryError: Java heap space
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.StringCoding.decode(StringCoding.java:254)
at java.lang.String.<init>(String.java:546)
at is.hail.expr.ir.GenericLine.toString(GenericLines.scala:322)
at is.hail.io.vcf.MatrixVCFReader$$anonfun$21$$anonfun$apply$10$$anonfun$apply$11.apply(LoadVCF.scala:1732)
at is.hail.io.vcf.MatrixVCFReader$$anonfun$21$$anonfun$apply$10$$anonfun$apply$11.apply(LoadVCF.scala:1731)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:464)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:128)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
… 10 more

Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1889)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1877)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1876)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1876)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:926)
at scala.Option.foreach(Option.scala:257)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:926)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2110)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2059)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2048)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:737)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2114)
at org.apache.spark.internal.io.SparkHadoopWriter$.write(SparkHadoopWriter.scala:78)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1096)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopDataset$1.apply(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopDataset(PairRDDFunctions.scala:1094)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply$mcV$sp(PairRDDFunctions.scala:1067)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$4.apply(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:1032)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply$mcV$sp(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsHadoopFile$1.apply(PairRDDFunctions.scala:958)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.PairRDDFunctions.saveAsHadoopFile(PairRDDFunctions.scala:957)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply$mcV$sp(RDD.scala:1499)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDD$$anonfun$saveAsTextFile$1.apply(RDD.scala:1478)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
at org.apache.spark.rdd.RDD.saveAsTextFile(RDD.scala:1478)
at is.hail.utils.richUtils.RichRDD$.writeTable$extension(RichRDD.scala:78)
at is.hail.expr.ir.TableValue.export(TableValue.scala:98)
at is.hail.expr.ir.TableTextWriter.apply(TableWriter.scala:337)
at is.hail.expr.ir.Interpret$.run(Interpret.scala:811)
at is.hail.expr.ir.Interpret$.alreadyLowered(Interpret.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.interpretAndCoerce$1(InterpretNonCompilable.scala:16)
at is.hail.expr.ir.InterpretNonCompilable$.is$hail$expr$ir$InterpretNonCompilable$$rewrite$1(InterpretNonCompilable.scala:53)
at is.hail.expr.ir.InterpretNonCompilable$.apply(InterpretNonCompilable.scala:58)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.transform(LoweringPass.scala:56)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3$$anonfun$1.apply(LoweringPass.scala:15)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:15)
at is.hail.expr.ir.lowering.LoweringPass$$anonfun$apply$3.apply(LoweringPass.scala:13)
at is.hail.utils.ExecutionTimer.time(ExecutionTimer.scala:69)
at is.hail.expr.ir.lowering.LoweringPass$class.apply(LoweringPass.scala:13)
at is.hail.expr.ir.lowering.InterpretNonCompilablePass$.apply(LoweringPass.scala:51)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:14)
at is.hail.expr.ir.lowering.LoweringPipeline$$anonfun$apply$1.apply(LoweringPipeline.scala:12)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at is.hail.expr.ir.lowering.LoweringPipeline.apply(LoweringPipeline.scala:12)
at is.hail.expr.ir.CompileAndEvaluate$._apply(CompileAndEvaluate.scala:28)
at is.hail.backend.spark.SparkBackend.is$hail$backend$spark$SparkBackend$$_execute(SparkBackend.scala:318)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:305)
at is.hail.backend.spark.SparkBackend$$anonfun$execute$1.apply(SparkBackend.scala:304)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:20)
at is.hail.expr.ir.ExecuteContext$$anonfun$scoped$1.apply(ExecuteContext.scala:18)
at is.hail.utils.package$.using(package.scala:602)
at is.hail.annotations.Region$.scoped(Region.scala:18)
at is.hail.expr.ir.ExecuteContext$.scoped(ExecuteContext.scala:18)
at is.hail.backend.spark.SparkBackend.withExecuteContext(SparkBackend.scala:230)
at is.hail.backend.spark.SparkBackend.execute(SparkBackend.scala:304)
at is.hail.backend.spark.SparkBackend.executeJSON(SparkBackend.scala:324)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)

org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:155)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

java.lang.OutOfMemoryError: Java heap space
at java.lang.StringCoding$StringDecoder.decode(StringCoding.java:149)
at java.lang.StringCoding.decode(StringCoding.java:193)
at java.lang.StringCoding.decode(StringCoding.java:254)
at java.lang.String.&lt;init&gt;(String.java:546)
at is.hail.expr.ir.GenericLine.toString(GenericLines.scala:322)
at is.hail.io.vcf.MatrixVCFReader$$anonfun$21$$anonfun$apply$10$$anonfun$apply$11.apply(LoadVCF.scala:1732)
at is.hail.io.vcf.MatrixVCFReader$$anonfun$21$$anonfun$apply$10$$anonfun$apply$11.apply(LoadVCF.scala:1731)
at scala.collection.Iterator$$anon$13.hasNext(Iterator.scala:464)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at is.hail.utils.richUtils.RichContextRDD$$anonfun$cleanupRegions$1$$anon$1.hasNext(RichContextRDD.scala:31)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:439)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:409)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:128)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$4.apply(SparkHadoopWriter.scala:127)
at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1394)
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:139)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:83)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:78)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:121)
at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:403)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:409)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

Hail version: 0.2.49-11ae8408bad0
Error summary: OutOfMemoryError: Java heap space

OK, thanks. This is super confusing.

Can you access the Spark web UI to look at memory settings? I’ve never seen an OOM from a VCF like this.

Sorry for the delayed response; it took me a while to figure out how to access the Spark web UI. Is this what you’re looking for? Otherwise, may I ask where I can find the memory settings? Thanks!

This indicates that you’re running on one machine with 32 cores, and have less than 400 MB (!) allocated for the entire process. This should solve things:

export PYSPARK_SUBMIT_ARGS="--driver-memory 8g pyspark-shell"

Thanks. I usually put the code in a Python script and execute it from the command line with “python3”. In this case, am I supposed to put

export PYSPARK_SUBMIT_ARGS="--driver-memory 8g pyspark-shell"

in all my Python scripts? My apologies if the question sounds stupid, but I’m not very familiar with Python…

No, this is a shell command, not Python — you run it once in your terminal before launching the script. You can also do:

PYSPARK_SUBMIT_ARGS="--driver-memory 8g pyspark-shell" python3
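To avoid typing this for every run, one option (a sketch, assuming a bash login shell) is to put the export in your shell startup file so every python3 invocation inherits it:

```shell
# Append once to ~/.bashrc (or ~/.profile); every new shell then
# launches Hail/PySpark with an 8 GB driver heap.
export PYSPARK_SUBMIT_ARGS="--driver-memory 8g pyspark-shell"
```

The trailing "pyspark-shell" token is required — PySpark ignores the variable without it.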

Also, you’ll probably want more than 8G for a machine with 32 cores. I’m not sure how much memory your machine has, though.
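If you’d rather keep everything inside the Python script itself, a sketch of the same idea: set the variable from Python before Hail (and thus the JVM) starts. The variable is only read at JVM launch time, so the assignment must come before the hail import; the import and init lines are commented out here since they depend on your environment.

```python
import os

# Must run before "import hail" — PySpark reads this variable
# when it launches the JVM, so setting it later has no effect.
os.environ["PYSPARK_SUBMIT_ARGS"] = "--driver-memory 8g pyspark-shell"

# import hail as hl   # import only after setting the variable
# hl.init()
```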

Sounds good, this is really helpful. Thanks a lot!