There seems to be a lot of weird interplay between Spark / s3 / s3a. I don't really have time to dig into this, but I think you're looking for something like the `spark.hadoop.fs.s3a.connection.maximum` configuration parameter, only for the plain `s3://` filesystem scheme rather than `s3a://`.
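For what it's worth, here's how the s3a setting is normally passed to a job. The second `--conf` line is an assumption on my part: `fs.s3.maxConnections` is the EMRFS property I'd try for the `s3://` scheme on EMR, but I haven't verified it applies in your setup.

```shell
# Raise the S3A connection-pool size (fs.s3a.connection.maximum is a real S3A property)
spark-submit \
  --conf spark.hadoop.fs.s3a.connection.maximum=200 \
  --conf spark.hadoop.fs.s3.maxConnections=200 \
  your_job.py
```

The same keys can also go in `spark-defaults.conf`, or be set on the `SparkConf` before the session is created.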