There are 2 datanode(s) running and no node(s) are excluded in this operation

Hello. While running sample QC on 20k WES samples, the following code fails:

import hail as hl

relatedness_ht = hl.pc_relate(filtered_mt.GT, min_individual_maf=0.01,
                              scores_expr=pca_scores[filtered_mt.col_key].scores,
                              block_size=4096, min_kinship=0.088, statistics='kin')
related_samples_to_remove = hl.maximal_independent_set(relatedness_ht.i,
                                                       relatedness_ht.j, keep=False)
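(Since min_kinship=0.088 is passed to pc_relate, relatedness_ht should already contain only pairs above that threshold, so I feed it straight into maximal_independent_set.)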

I'm getting the following error:

Java stack trace:
org.apache.spark.SparkException: Job aborted due to stage failure: Task 172 in stage 23.0 failed 20 times, most recent failure: Lost task 172.19 in stage 23.0 (TID 13239, sample-qc-sw-prt7.us-central1-f.c.cncd-cncd.internal, executor 19): org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /tmp/pcrelate-write-read-8cB5l6EDjAw39fHZuyS79m.bm/parts/part-0172-23-172-19-2ecd08d9-824d-6cfc-578b-f26d7e110ea3 could only be replicated to 0 nodes instead of minReplication (=1).  There are 2 datanode(s) running and no node(s) are excluded in this operation.
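
The failing file lives under /tmp on the cluster's HDFS, so I suspect the two datanodes are running out of local disk while pc_relate writes its intermediate block matrix. Would pointing Hail's temporary directory at a GCS bucket avoid this? A minimal sketch of what I have in mind (the bucket path is hypothetical):

import hail as hl

# Hypothetical bucket; tmp_dir redirects Hail's intermediate files
# (e.g. pc_relate's block-matrix write/read) away from the cluster's HDFS.
hl.init(tmp_dir='gs://my-bucket/hail-tmp/')

Or would resizing the cluster (more workers, or larger disks per worker) be the better fix, since that would give HDFS enough space to replicate the part files?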

What should I do?
Thanks.