Configs for taking limit rows from a bucketed table:

Name | Default | Meaning | Since Version |
---|---|---|---|
spark.sql.files.maxPartitionBytes | 128 MB | The maximum number of bytes to pack into a single partition when reading files | 2.0.0 |
spark.sql.sources.bucketing.enabled | true | When false, bucketed tables are treated as normal tables | 2.0.0 |
spark.sql.sources.bucketing.autoBucketedScan.enabled | true | When true, automatically decides whether to do a bucketed scan on input tables based on the query plan | 3.1.1 |
spark.sql.limit.scaleUpFactor | 4 | Minimal increase rate in the number of partitions between attempts when executing a take on a query | 2.1.1 |
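
As a sketch of how these settings fit together (the session setup and the `events` table name are hypothetical, not from the gist): with bucketing and the automatic bucketed scan both enabled, Spark can drop the bucketed scan for a plain `LIMIT`, size the read partitions by `spark.sql.files.maxPartitionBytes`, and use `spark.sql.limit.scaleUpFactor` to decide how many more partitions to scan on each retry if the first attempt returns too few rows.

```scala
import org.apache.spark.sql.SparkSession

object LimitBucketedTable {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("limit-on-bucketed-table")
      // Keep bucketing on, but let Spark skip the bucketed scan
      // when the query plan (e.g. a plain LIMIT) does not need it.
      .config("spark.sql.sources.bucketing.enabled", "true")
      .config("spark.sql.sources.bucketing.autoBucketedScan.enabled", "true")
      // Each take()/limit attempt scans 4x more partitions than the last.
      .config("spark.sql.limit.scaleUpFactor", "4")
      .getOrCreate()

    // "events" is a hypothetical bucketed table; a LIMIT here can be
    // satisfied by scanning only a few partitions instead of every bucket.
    val rows = spark.table("events").limit(10).collect()
    rows.foreach(println)

    spark.stop()
  }
}
```

Setting `spark.sql.sources.bucketing.autoBucketedScan.enabled` to false would instead force the bucketed layout, so even a small `LIMIT` reads one partition per bucket.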