Created
June 11, 2020 20:19
[info] Compiling 1 Scala source to /Users/lord_pretzel/Documents/workspace/mimir-caveats/target/scala-2.12/test-classes ...
[info] Done compiling.
[success] Total time: 1 s, completed Jun 11, 2020, 3:17:21 PM
sbt:mimir-caveats> testOnly org.mimirdb.caveats.LogicalPlanRangeSpec -- ex "certain inputs.aggregation - no group-by - aggregtion functions only"
[info] LogicalPlanRangeSpec
[info] DataFrame Range Annotations
[info] Certain inputs
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/lord_pretzel/Library/Caches/Coursier/v1/https/repo1.maven.org/maven2/org/apache/spark/spark-unsafe_2.12/3.0.0-preview2/spark-unsafe_2.12-3.0.0-preview2.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
REWRITING PLAN OPERATOR: Project [X#824]
+- Aggregate [avg(cast(A#14 as double)) AS X#824]
   +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
REWRITING PLAN OPERATOR: Aggregate [avg(cast(A#14 as double)) AS X#824]
+- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
REWRITING PLAN OPERATOR: RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
========================================
REWRITE OPERATOR TYPE LEAF NODE
========================================
--------------------------
REWRITTEN OPERATOR:
--------------------------
'Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, 'A AS __CAVEATS_A_LB#833, 'A AS __CAVEATS_A_UB#834, 'B AS __CAVEATS_B_LB#835, 'B AS __CAVEATS_B_UB#836, 'C AS __CAVEATS_C_LB#837, 'C AS __CAVEATS_C_UB#838]
+- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
----------------------------------------
EXPR: avg(cast(A#14 as double)) AS X#824
GB: None
trace: true
----------------------------------------
EXPR: avg(cast(A#14 as double))
GB: None
trace: true
----------------------------------------
EXPR: sum(cast(A#14 as double))
GB: None
trace: true
===========> BG EQUALS: true
GROUP BY: None
----------------------------------------
EXPR: cast(A#14 as double)
GB: None
trace: true
----------------------------------------
EXPR: A#14
GB: None
trace: true
sum(CASE WHEN (`__CAVEATS_ROW_LB` > 0) THEN (CAST(`__CAVEATS_A_LB` AS DOUBLE) * CASE WHEN (CAST(`__CAVEATS_A_LB` AS DOUBLE) < 0) THEN `__CAVEATS_ROW_UB` ELSE `__CAVEATS_ROW_LB` END) ELSE least(0.0D, (CAST(`__CAVEATS_A_LB` AS DOUBLE) * CASE WHEN (CAST(`__CAVEATS_A_LB` AS DOUBLE) < 0) THEN `__CAVEATS_ROW_UB` ELSE `__CAVEATS_ROW_LB` END)) END)
sum(CASE WHEN true THEN (CAST(`A` AS DOUBLE) * `__CAVEATS_ROW_BG`) ELSE 0.0D END)
sum(CASE WHEN (`__CAVEATS_ROW_LB` > 0) THEN (CAST(`__CAVEATS_A_UB` AS DOUBLE) * CASE WHEN (CAST(`__CAVEATS_A_UB` AS DOUBLE) > 0) THEN `__CAVEATS_ROW_UB` ELSE `__CAVEATS_ROW_LB` END) ELSE greatest(0.0D, (CAST(`__CAVEATS_A_UB` AS DOUBLE) * CASE WHEN (CAST(`__CAVEATS_A_UB` AS DOUBLE) > 0) THEN `__CAVEATS_ROW_UB` ELSE `__CAVEATS_ROW_LB` END)) END)
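The three rewritten sums above compute the lower bound, best guess, and upper bound of SUM(A) by combining each row's value bounds (`__CAVEATS_A_LB`/`__CAVEATS_A_UB`) with its multiplicity bounds (`__CAVEATS_ROW_LB`/`__CAVEATS_ROW_BG`/`__CAVEATS_ROW_UB`). A minimal standalone sketch of that per-row logic (hypothetical class and method names, not mimir-caveats' actual API):

```java
// Sketch of the bounded-SUM rewrite: for each row, pick the multiplicity
// that minimizes (for the lower bound) or maximizes (for the upper bound)
// the contribution, and let possibly-absent rows contribute at most 0.
public class SumBounds {

    // aLb[i]: lower bound on A's value; rowLb/rowUb: row multiplicity bounds.
    static double sumLb(double[] aLb, int[] rowLb, int[] rowUb) {
        double s = 0.0;
        for (int i = 0; i < aLb.length; i++) {
            // A negative value is minimized by the largest multiplicity,
            // a positive one by the smallest (mirrors the inner CASE WHEN).
            double contrib = aLb[i] * (aLb[i] < 0 ? rowUb[i] : rowLb[i]);
            // Rows that may be absent (rowLb == 0) add at most 0 (least(0.0, ...)).
            s += rowLb[i] > 0 ? contrib : Math.min(0.0, contrib);
        }
        return s;
    }

    // Mirror image for the upper bound (greatest(0.0, ...)).
    static double sumUb(double[] aUb, int[] rowLb, int[] rowUb) {
        double s = 0.0;
        for (int i = 0; i < aUb.length; i++) {
            double contrib = aUb[i] * (aUb[i] > 0 ? rowUb[i] : rowLb[i]);
            s += rowLb[i] > 0 ? contrib : Math.max(0.0, contrib);
        }
        return s;
    }

    public static void main(String[] args) {
        // One certain row with A in [1,1], one uncertain row with A in [-2,0].
        System.out.println(sumLb(new double[]{1.0, -2.0}, new int[]{1, 0}, new int[]{1, 1})); // -1.0
        System.out.println(sumUb(new double[]{1.0, 0.0}, new int[]{1, 0}, new int[]{1, 1}));  // 1.0
    }
}
```

The best-guess sum (the middle expression) needs no case analysis: it simply weights each value by `__CAVEATS_ROW_BG`.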
----------------------------------------
EXPR: count(1)
GB: None
trace: true
===========> BG EQUALS: true
GROUP BY: None
--------------------------
REWRITTEN OPERATOR:
--------------------------
'Aggregate [CASE WHEN (sum('__CAVEATS_ROW_BG) = 0) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast('A as double) * '__CAVEATS_ROW_BG) ELSE 0.0 END) / cast(sum('__CAVEATS_ROW_BG) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE least(0.0, (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum('__CAVEATS_ROW_UB) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE greatest(0.0, (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum('__CAVEATS_ROW_UB) as double)) END AS __CAVEATS_X_UB#844]
+- 'Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, 'A AS __CAVEATS_A_LB#833, 'A AS __CAVEATS_A_UB#834, 'B AS __CAVEATS_B_LB#835, 'B AS __CAVEATS_B_UB#836, 'C AS __CAVEATS_C_LB#837, 'C AS __CAVEATS_C_UB#838]
   +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
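Each AVG bound in the rewritten Aggregate above is a bounded sum divided by a bounded multiplicity count, wrapped in a `CASE WHEN sum(...) = 0 THEN 0.0` guard so an empty (or possibly empty) group does not divide by zero. A hedged sketch of that guard pattern (hypothetical names, not the project's API):

```java
// Sketch of the guarded-division pattern used for the AVG bounds:
// AVG bound = bounded SUM / summed multiplicity, defaulting to 0.0
// when the multiplicity sum is 0 (the group may contain no rows).
public class AvgBound {
    static double avgBound(double boundedSum, long multiplicitySum) {
        // Mirrors: CASE WHEN sum(mult) = 0 THEN 0.0 ELSE sum(...) / sum(mult) END
        return multiplicitySum == 0 ? 0.0 : boundedSum / (double) multiplicitySum;
    }

    public static void main(String[] args) {
        System.out.println(avgBound(12.0, 7)); // normal case: bounded sum over count
        System.out.println(avgBound(0.0, 0));  // empty group falls back to 0.0
    }
}
```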
----------------------------------------
EXPR: X#824
GB: None
trace: true
bestGuess: ArrayBuffer('X AS X#845)
========================================
REWRITE OPERATOR TYPE PROJECT
========================================
--------------------------
REWRITTEN OPERATOR:
--------------------------
'Project ['X AS X#845, '__CAVEATS_ROW_LB AS __CAVEATS_ROW_LB#846, '__CAVEATS_ROW_BG AS __CAVEATS_ROW_BG#847, '__CAVEATS_ROW_UB AS __CAVEATS_ROW_UB#848, '__CAVEATS_X_LB AS __CAVEATS_X_LB#849, '__CAVEATS_X_UB AS __CAVEATS_X_UB#850]
+- 'Aggregate [CASE WHEN (sum('__CAVEATS_ROW_BG) = 0) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast('A as double) * '__CAVEATS_ROW_BG) ELSE 0.0 END) / cast(sum('__CAVEATS_ROW_BG) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE least(0.0, (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum('__CAVEATS_ROW_UB) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE greatest(0.0, (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum('__CAVEATS_ROW_UB) as double)) END AS __CAVEATS_X_UB#844]
   +- 'Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, 'A AS __CAVEATS_A_LB#833, 'A AS __CAVEATS_A_UB#834, 'B AS __CAVEATS_B_LB#835, 'B AS __CAVEATS_B_UB#836, 'C AS __CAVEATS_C_LB#837, 'C AS __CAVEATS_C_UB#838]
      +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
is already annotated? false
base schema: StructType(StructField(X,DoubleType,true))
row encoder StructType(StructField(X,DoubleType,true), StructField(__CAVEATS_ROW_LB,IntegerType,false), StructField(__CAVEATS_ROW_BG,IntegerType,false), StructField(__CAVEATS_ROW_UB,IntegerType,false), StructField(__CAVEATS_X_LB,DoubleType,true), StructField(__CAVEATS_X_UB,DoubleType,true))
================================================================================
FINAL
================================================================================
============================== QUERY EXECUTION (PLANS) ==============================
== Parsed Logical Plan ==
'Project ['X AS X#845, '__CAVEATS_ROW_LB AS __CAVEATS_ROW_LB#846, '__CAVEATS_ROW_BG AS __CAVEATS_ROW_BG#847, '__CAVEATS_ROW_UB AS __CAVEATS_ROW_UB#848, '__CAVEATS_X_LB AS __CAVEATS_X_LB#849, '__CAVEATS_X_UB AS __CAVEATS_X_UB#850]
+- 'Aggregate [CASE WHEN (sum('__CAVEATS_ROW_BG) = 0) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast('A as double) * '__CAVEATS_ROW_BG) ELSE 0.0 END) / cast(sum('__CAVEATS_ROW_BG) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE least(0.0, (cast('__CAVEATS_A_LB as double) * CASE WHEN (cast('__CAVEATS_A_LB as double) < 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN '__CAVEATS_ROW_LB ELSE least(0, '__CAVEATS_ROW_LB) END) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum('__CAVEATS_ROW_UB) = 0) THEN 0.0 ELSE (sum(CASE WHEN ('__CAVEATS_ROW_LB > 0) THEN (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END) ELSE greatest(0.0, (cast('__CAVEATS_A_UB as double) * CASE WHEN (cast('__CAVEATS_A_UB as double) > 0) THEN '__CAVEATS_ROW_UB ELSE '__CAVEATS_ROW_LB END)) END) / cast(sum('__CAVEATS_ROW_UB) as double)) END AS __CAVEATS_X_UB#844]
   +- 'Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, 'A AS __CAVEATS_A_LB#833, 'A AS __CAVEATS_A_UB#834, 'B AS __CAVEATS_B_LB#835, 'B AS __CAVEATS_B_UB#836, 'C AS __CAVEATS_C_LB#837, 'C AS __CAVEATS_C_UB#838]
      +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
== Analyzed Logical Plan ==
X: double, __CAVEATS_ROW_LB: int, __CAVEATS_ROW_BG: int, __CAVEATS_ROW_UB: int, __CAVEATS_X_LB: double, __CAVEATS_X_UB: double
Project [X#843 AS X#845, __CAVEATS_ROW_LB#839 AS __CAVEATS_ROW_LB#846, __CAVEATS_ROW_BG#840 AS __CAVEATS_ROW_BG#847, __CAVEATS_ROW_UB#841 AS __CAVEATS_ROW_UB#848, __CAVEATS_X_LB#842 AS __CAVEATS_X_LB#849, __CAVEATS_X_UB#844 AS __CAVEATS_X_UB#850]
+- Aggregate [CASE WHEN (sum(cast(__CAVEATS_ROW_BG#831 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast(A#14 as double) * cast(__CAVEATS_ROW_BG#831 as double)) ELSE 0.0 END) / cast(sum(cast(__CAVEATS_ROW_BG#831 as bigint)) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE least(0.0, (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum(cast(__CAVEATS_ROW_UB#832 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE greatest(0.0, (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(__CAVEATS_ROW_UB#832 as bigint)) as double)) END AS __CAVEATS_X_UB#844]
   +- Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834, B#15 AS __CAVEATS_B_LB#835, B#15 AS __CAVEATS_B_UB#836, C#16 AS __CAVEATS_C_LB#837, C#16 AS __CAVEATS_C_UB#838]
      +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
== Optimized Logical Plan ==
Aggregate [CASE WHEN (sum(1) = 0) THEN 0.0 ELSE (sum((cast(A#14 as double) * 1.0)) / cast(sum(1) as double)) END AS X#845, 1 AS __CAVEATS_ROW_LB#846, 1 AS __CAVEATS_ROW_BG#847, 1 AS __CAVEATS_ROW_UB#848, CASE WHEN (sum(1) = 0) THEN 0.0 ELSE (sum((cast(__CAVEATS_A_LB#833 as double) * 1.0)) / cast(sum(1) as double)) END AS __CAVEATS_X_LB#849, CASE WHEN (sum(1) = 0) THEN 0.0 ELSE (sum((cast(__CAVEATS_A_UB#834 as double) * 1.0)) / cast(sum(1) as double)) END AS __CAVEATS_X_UB#850]
+- Project [A#14, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834]
   +- RelationV2[A#14] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
== Physical Plan ==
*(1) HashAggregate(keys=[], functions=[sum(1), sum((cast(A#14 as double) * 1.0)), sum((cast(__CAVEATS_A_LB#833 as double) * 1.0)), sum((cast(__CAVEATS_A_UB#834 as double) * 1.0))], output=[X#845, __CAVEATS_ROW_LB#846, __CAVEATS_ROW_BG#847, __CAVEATS_ROW_UB#848, __CAVEATS_X_LB#849, __CAVEATS_X_UB#850])
+- *(1) HashAggregate(keys=[], functions=[partial_sum(1), partial_sum((cast(A#14 as double) * 1.0)), partial_sum((cast(__CAVEATS_A_LB#833 as double) * 1.0)), partial_sum((cast(__CAVEATS_A_UB#834 as double) * 1.0))], output=[sum#862L, sum#863, sum#864, sum#865])
   +- *(1) Project [A#14, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834]
      +- BatchScan[A#14] CSVScan Location: InMemoryFileIndex[file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv], ReadSchema: struct<A:string>
============================== SCHEMA ==============================
StructType(StructField(X,DoubleType,true), StructField(__CAVEATS_ROW_LB,IntegerType,false), StructField(__CAVEATS_ROW_BG,IntegerType,false), StructField(__CAVEATS_ROW_UB,IntegerType,false), StructField(__CAVEATS_X_LB,DoubleType,true), StructField(__CAVEATS_X_UB,DoubleType,true))
============================== RESULT ==============================
+---+----------------+----------------+----------------+--------------+--------------+
|  X|__CAVEATS_ROW_LB|__CAVEATS_ROW_BG|__CAVEATS_ROW_UB|__CAVEATS_X_LB|__CAVEATS_X_UB|
+---+----------------+----------------+----------------+--------------+--------------+
|1.0|               1|               1|               1|           1.0|           1.0|
+---+----------------+----------------+----------------+--------------+--------------+
================================================================================
QUERY
================================================================================
Project [X#843 AS X#845, __CAVEATS_ROW_LB#839 AS __CAVEATS_ROW_LB#846, __CAVEATS_ROW_BG#840 AS __CAVEATS_ROW_BG#847, __CAVEATS_ROW_UB#841 AS __CAVEATS_ROW_UB#848, __CAVEATS_X_LB#842 AS __CAVEATS_X_LB#849, __CAVEATS_X_UB#844 AS __CAVEATS_X_UB#850]
+- Aggregate [CASE WHEN (sum(cast(__CAVEATS_ROW_BG#831 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast(A#14 as double) * cast(__CAVEATS_ROW_BG#831 as double)) ELSE 0.0 END) / cast(sum(cast(__CAVEATS_ROW_BG#831 as bigint)) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE least(0.0, (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum(cast(__CAVEATS_ROW_UB#832 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE greatest(0.0, (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(__CAVEATS_ROW_UB#832 as bigint)) as double)) END AS __CAVEATS_X_UB#844]
   +- Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834, B#15 AS __CAVEATS_B_LB#835, B#15 AS __CAVEATS_B_UB#836, C#16 AS __CAVEATS_C_LB#837, C#16 AS __CAVEATS_C_UB#838]
      +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
================================================================================
RESULT
================================================================================
+---+----------------+----------------+----------------+--------------+--------------+
|  X|__CAVEATS_ROW_LB|__CAVEATS_ROW_BG|__CAVEATS_ROW_UB|__CAVEATS_X_LB|__CAVEATS_X_UB|
+---+----------------+----------------+----------------+--------------+--------------+
|1.0|               1|               1|               1|           1.0|           1.0|
+---+----------------+----------------+----------------+--------------+--------------+
================================================================================
QUERY
================================================================================
Aggregate [CASE WHEN (sum(cast(__CAVEATS_ROW_BG#831 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN true THEN (cast(A#14 as double) * cast(__CAVEATS_ROW_BG#831 as double)) ELSE 0.0 END) / cast(sum(cast(__CAVEATS_ROW_BG#831 as bigint)) as double)) END AS X#843, 1 AS __CAVEATS_ROW_LB#839, 1 AS __CAVEATS_ROW_BG#840, 1 AS __CAVEATS_ROW_UB#841, CASE WHEN (sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE least(0.0, (cast(__CAVEATS_A_LB#833 as double) * cast(CASE WHEN (cast(__CAVEATS_A_LB#833 as double) < cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN __CAVEATS_ROW_LB#830 ELSE least(0, __CAVEATS_ROW_LB#830) END as bigint)) as double)) END AS __CAVEATS_X_LB#842, CASE WHEN (sum(cast(__CAVEATS_ROW_UB#832 as bigint)) = cast(0 as bigint)) THEN 0.0 ELSE (sum(CASE WHEN (__CAVEATS_ROW_LB#830 > 0) THEN (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double)) ELSE greatest(0.0, (cast(__CAVEATS_A_UB#834 as double) * cast(CASE WHEN (cast(__CAVEATS_A_UB#834 as double) > cast(0 as double)) THEN __CAVEATS_ROW_UB#832 ELSE __CAVEATS_ROW_LB#830 END as double))) END) / cast(sum(cast(__CAVEATS_ROW_UB#832 as bigint)) as double)) END AS __CAVEATS_X_UB#844]
+- Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834, B#15 AS __CAVEATS_B_LB#835, B#15 AS __CAVEATS_B_UB#836, C#16 AS __CAVEATS_C_LB#837, C#16 AS __CAVEATS_C_UB#838]
   +- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
================================================================================
RESULT
================================================================================
+---+----------------+----------------+----------------+--------------+--------------+
|  X|__CAVEATS_ROW_LB|__CAVEATS_ROW_BG|__CAVEATS_ROW_UB|__CAVEATS_X_LB|__CAVEATS_X_UB|
+---+----------------+----------------+----------------+--------------+--------------+
|1.0|               1|               1|               1|           1.0|           1.0|
+---+----------------+----------------+----------------+--------------+--------------+
================================================================================
QUERY
================================================================================
Project [A#14, B#15, C#16, 1 AS __CAVEATS_ROW_LB#830, 1 AS __CAVEATS_ROW_BG#831, 1 AS __CAVEATS_ROW_UB#832, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834, B#15 AS __CAVEATS_B_LB#835, B#15 AS __CAVEATS_B_UB#836, C#16 AS __CAVEATS_C_LB#837, C#16 AS __CAVEATS_C_UB#838]
+- RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
================================================================================
RESULT
================================================================================
+---+----+----+----------------+----------------+----------------+--------------+--------------+--------------+--------------+--------------+--------------+
|  A|   B|   C|__CAVEATS_ROW_LB|__CAVEATS_ROW_BG|__CAVEATS_ROW_UB|__CAVEATS_A_LB|__CAVEATS_A_UB|__CAVEATS_B_LB|__CAVEATS_B_UB|__CAVEATS_C_LB|__CAVEATS_C_UB|
+---+----+----+----------------+----------------+----------------+--------------+--------------+--------------+--------------+--------------+--------------+
|  1|   2|   3|               1|               1|               1|             1|             1|             2|             2|             3|             3|
|  1|   3|   1|               1|               1|               1|             1|             1|             3|             3|             1|             1|
|  2|null|   1|               1|               1|               1|             2|             2|          null|          null|             1|             1|
|  1|   2|null|               1|               1|               1|             1|             1|             2|             2|          null|          null|
|  1|   4|   2|               1|               1|               1|             1|             1|             4|             4|             2|             2|
|  2|   2|   1|               1|               1|               1|             2|             2|             2|             2|             1|             1|
|  4|   2|   4|               1|               1|               1|             4|             4|             2|             2|             4|             4|
+---+----+----+----------------+----------------+----------------+--------------+--------------+--------------+--------------+--------------+--------------+
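The table above shows the leaf-node annotation in action: every input row is certain, so it gets multiplicity bounds (1, 1, 1) and each attribute's lower and upper bounds equal its value. A minimal sketch of that annotation step on a generic row (hypothetical `annotate` helper, not the project's API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of annotating a fully certain leaf row: add row-multiplicity
// bounds of 1 and collapse each attribute's value range to the value itself.
public class AnnotateLeaf {
    static Map<String, Object> annotate(Map<String, Object> row) {
        Map<String, Object> out = new LinkedHashMap<>(row);
        out.put("__CAVEATS_ROW_LB", 1);
        out.put("__CAVEATS_ROW_BG", 1);
        out.put("__CAVEATS_ROW_UB", 1);
        for (Map.Entry<String, Object> e : row.entrySet()) {
            out.put("__CAVEATS_" + e.getKey() + "_LB", e.getValue());
            out.put("__CAVEATS_" + e.getKey() + "_UB", e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("A", 1);
        row.put("B", 2);
        System.out.println(annotate(row));
    }
}
```

Null values simply propagate into both bounds, matching the `null` bound columns in the table.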
================================================================================
QUERY
================================================================================
RelationV2[A#14, B#15, C#16] csv file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv
================================================================================
RESULT
================================================================================
+---+----+----+
|  A|   B|   C|
+---+----+----+
|  1|   2|   3|
|  1|   3|   1|
|  2|null|   1|
|  1|   2|null|
|  1|   4|   2|
|  2|   2|   1|
|  4|   2|   4|
+---+----+----+
Found 1 WholeStageCodegen subtrees.
== Subtree 1 / 1 (maxMethodCodeSize:636; maxConstantPoolSize:289(0.44% used); numInnerClasses:0) ==
*(1) HashAggregate(keys=[], functions=[sum(1), sum((cast(A#14 as double) * 1.0)), sum((cast(__CAVEATS_A_LB#833 as double) * 1.0)), sum((cast(__CAVEATS_A_UB#834 as double) * 1.0))], output=[X#845, __CAVEATS_ROW_LB#846, __CAVEATS_ROW_BG#847, __CAVEATS_ROW_UB#848, __CAVEATS_X_LB#849, __CAVEATS_X_UB#850])
+- *(1) HashAggregate(keys=[], functions=[partial_sum(1), partial_sum((cast(A#14 as double) * 1.0)), partial_sum((cast(__CAVEATS_A_LB#833 as double) * 1.0)), partial_sum((cast(__CAVEATS_A_UB#834 as double) * 1.0))], output=[sum#862L, sum#863, sum#864, sum#865])
   +- *(1) Project [A#14, A#14 AS __CAVEATS_A_LB#833, A#14 AS __CAVEATS_A_UB#834]
      +- BatchScan[A#14] CSVScan Location: InMemoryFileIndex[file:/Users/lord_pretzel/Documents/workspace/mimir-caveats/test_data/r.csv], ReadSchema: struct<A:string>
Generated code:
/* 001 */ public Object generate(Object[] references) {
/* 002 */   return new GeneratedIteratorForCodegenStage1(references);
/* 003 */ }
/* 004 */
/* 005 */ // codegenStageId=1
/* 006 */ final class GeneratedIteratorForCodegenStage1 extends org.apache.spark.sql.execution.BufferedRowIterator {
/* 007 */   private Object[] references;
/* 008 */   private scala.collection.Iterator[] inputs;
/* 009 */   private boolean agg_initAgg_0;
/* 010 */   private boolean agg_bufIsNull_0;
/* 011 */   private long agg_bufValue_0;
/* 012 */   private boolean agg_bufIsNull_1;
/* 013 */   private double agg_bufValue_1;
/* 014 */   private boolean agg_bufIsNull_2;
/* 015 */   private double agg_bufValue_2;
/* 016 */   private boolean agg_bufIsNull_3;
/* 017 */   private double agg_bufValue_3;
/* 018 */   private boolean agg_initAgg_1;
/* 019 */   private boolean agg_bufIsNull_4;
/* 020 */   private long agg_bufValue_4;
/* 021 */   private boolean agg_bufIsNull_5;
/* 022 */   private double agg_bufValue_5;
/* 023 */   private boolean agg_bufIsNull_6;
/* 024 */   private double agg_bufValue_6;
/* 025 */   private boolean agg_bufIsNull_7;
/* 026 */   private double agg_bufValue_7;
/* 027 */   private scala.collection.Iterator inputadapter_input_0;
/* 028 */   private boolean agg_agg_isNull_46_0;
/* 029 */   private boolean agg_agg_isNull_50_0;
/* 030 */   private boolean agg_agg_isNull_52_0;
/* 031 */   private boolean agg_agg_isNull_60_0;
/* 032 */   private boolean agg_agg_isNull_62_0;
/* 033 */   private boolean agg_agg_isNull_70_0;
/* 034 */   private boolean agg_agg_isNull_72_0;
/* 035 */   private boolean agg_agg_isNull_88_0;
/* 036 */   private boolean agg_agg_isNull_90_0;
/* 037 */   private boolean agg_agg_isNull_95_0;
/* 038 */   private boolean agg_agg_isNull_97_0;
/* 039 */   private boolean agg_agg_isNull_102_0;
/* 040 */   private boolean agg_agg_isNull_104_0;
/* 041 */   private boolean agg_agg_isNull_109_0;
/* 042 */   private boolean agg_agg_isNull_111_0;
/* 043 */   private org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[] project_mutableStateArray_0 = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter[5];
/* 044 */
/* 045 */   public GeneratedIteratorForCodegenStage1(Object[] references) {
/* 046 */     this.references = references;
/* 047 */   }
/* 048 */
/* 049 */   public void init(int index, scala.collection.Iterator[] inputs) {
/* 050 */     partitionIndex = index;
/* 051 */     this.inputs = inputs;
/* 052 */
/* 053 */     inputadapter_input_0 = inputs[0];
/* 054 */     project_mutableStateArray_0[0] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(3, 96);
/* 055 */     project_mutableStateArray_0[1] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(3, 96);
/* 056 */     project_mutableStateArray_0[2] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(4, 0);
/* 057 */     project_mutableStateArray_0[3] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(4, 0);
/* 058 */     project_mutableStateArray_0[4] = new org.apache.spark.sql.catalyst.expressions.codegen.UnsafeRowWriter(6, 0);
/* 059 */
/* 060 */   }
/* 061 */
/* 062 */   private void agg_doAggregate_sum_6(double agg_expr_2_1, boolean agg_exprIsNull_2_1) throws java.io.IOException {
/* 063 */     // do aggregate for sum
/* 064 */     // evaluate aggregate function
/* 065 */     agg_agg_isNull_102_0 = true;
/* 066 */     double agg_value_102 = -1.0;
/* 067 */     do {
/* 068 */       boolean agg_isNull_103 = true;
/* 069 */       double agg_value_103 = -1.0;
/* 070 */       agg_agg_isNull_104_0 = true;
/* 071 */       double agg_value_104 = -1.0;
/* 072 */       do {
/* 073 */         if (!agg_bufIsNull_2) {
/* 074 */           agg_agg_isNull_104_0 = false;
/* 075 */           agg_value_104 = agg_bufValue_2;
/* 076 */           continue;
/* 077 */         }
/* 078 */
/* 079 */         if (!false) {
/* 080 */           agg_agg_isNull_104_0 = false;
/* 081 */           agg_value_104 = 0.0D;
/* 082 */           continue;
/* 083 */         }
/* 084 */
/* 085 */       } while (false);
/* 086 */
/* 087 */       if (!agg_exprIsNull_2_1) {
/* 088 */         agg_isNull_103 = false; // resultCode could change nullability.
/* 089 */
/* 090 */         agg_value_103 = agg_value_104 + agg_expr_2_1;
/* 091 */
/* 092 */       }
/* 093 */       if (!agg_isNull_103) {
/* 094 */         agg_agg_isNull_102_0 = false;
/* 095 */         agg_value_102 = agg_value_103;
/* 096 */         continue;
/* 097 */       }
/* 098 */
/* 099 */       if (!agg_bufIsNull_2) {
/* 100 */         agg_agg_isNull_102_0 = false;
/* 101 */         agg_value_102 = agg_bufValue_2;
/* 102 */         continue;
/* 103 */       }
/* 104 */
/* 105 */     } while (false);
/* 106 */     // update aggregation buffers
/* 107 */     agg_bufIsNull_2 = agg_agg_isNull_102_0;
/* 108 */     agg_bufValue_2 = agg_value_102;
/* 109 */   }
/* 110 */
/* 111 */   private void agg_doAggregate_sum_0() throws java.io.IOException {
/* 112 */     // do aggregate for sum
/* 113 */     // evaluate aggregate function
/* 114 */     agg_agg_isNull_46_0 = true;
/* 115 */     long agg_value_46 = -1L;
/* 116 */     do {
/* 117 */       if (!agg_bufIsNull_4) {
/* 118 */         agg_agg_isNull_46_0 = false;
/* 119 */         agg_value_46 = agg_bufValue_4;
/* 120 */         continue;
/* 121 */       }
/* 122 */
/* 123 */       if (!false) {
/* 124 */         agg_agg_isNull_46_0 = false;
/* 125 */         agg_value_46 = 0L;
/* 126 */         continue;
/* 127 */       }
/* 128 */
/* 129 */     } while (false);
/* 130 */
/* 131 */     long agg_value_45 = -1L;
/* 132 */
/* 133 */     agg_value_45 = agg_value_46 + 1L;
/* 134 */     // update aggregation buffers
/* 135 */     agg_bufIsNull_4 = false;
/* 136 */     agg_bufValue_4 = agg_value_45;
/* 137 */   }
/* 138 */
/* 139 */   private void agg_doAggregate_sum_3(boolean agg_exprIsNull_2_0, org.apache.spark.unsafe.types.UTF8String agg_expr_2_0) throws java.io.IOException {
/* 140 */     // do aggregate for sum
/* 141 */     // evaluate aggregate function
/* 142 */     agg_agg_isNull_70_0 = true;
/* 143 */     double agg_value_70 = -1.0;
/* 144 */     do {
/* 145 */       boolean agg_isNull_71 = true;
/* 146 */       double agg_value_71 = -1.0;
/* 147 */       agg_agg_isNull_72_0 = true;
/* 148 */       double agg_value_72 = -1.0;
/* 149 */       do {
/* 150 */         if (!agg_bufIsNull_7) {
/* 151 */           agg_agg_isNull_72_0 = false;
/* 152 */           agg_value_72 = agg_bufValue_7;
/* 153 */           continue;
/* 154 */         }
/* 155 */
/* 156 */         if (!false) {
/* 157 */           agg_agg_isNull_72_0 = false;
/* 158 */           agg_value_72 = 0.0D;
/* 159 */           continue;
/* 160 */         }
/* 161 */
/* 162 */       } while (false);
/* 163 */       boolean agg_isNull_75 = true;
/* 164 */       double agg_value_75 = -1.0;
/* 165 */       boolean agg_isNull_76 = agg_exprIsNull_2_0;
/* 166 */       double agg_value_76 = -1.0;
/* 167 */       if (!agg_exprIsNull_2_0) {
/* 168 */         final String agg_doubleStr_2 = agg_expr_2_0.toString();
/* 169 */         try {
/* 170 */           agg_value_76 = Double.valueOf(agg_doubleStr_2);
/* 171 */         } catch (java.lang.NumberFormatException e) {
/* 172 */           final Double d = (Double) Cast.processFloatingPointSpecialLiterals(agg_doubleStr_2, false);
/* 173 */           if (d == null) {
/* 174 */             agg_isNull_76 = true;
/* 175 */           } else {
/* 176 */             agg_value_76 = d.doubleValue();
/* 177 */           }
/* 178 */         }
/* 179 */       }
/* 180 */       if (!agg_isNull_76) {
/* 181 */         agg_isNull_75 = false; // resultCode could change nullability.
/* 182 */
/* 183 */         agg_value_75 = agg_value_76 * 1.0D;
/* 184 */
/* 185 */       }
/* 186 */       if (!agg_isNull_75) {
/* 187 */         agg_isNull_71 = false; // resultCode could change nullability.
/* 188 */
/* 189 */         agg_value_71 = agg_value_72 + agg_value_75;
/* 190 */
/* 191 */       }
/* 192 */       if (!agg_isNull_71) {
/* 193 */         agg_agg_isNull_70_0 = false;
/* 194 */         agg_value_70 = agg_value_71;
/* 195 */         continue;
/* 196 */       }
/* 197 */
/* 198 */       if (!agg_bufIsNull_7) {
/* 199 */         agg_agg_isNull_70_0 = false;
/* 200 */         agg_value_70 = agg_bufValue_7;
/* 201 */         continue;
/* 202 */       }
/* 203 */
/* 204 */     } while (false);
/* 205 */     // update aggregation buffers
/* 206 */     agg_bufIsNull_7 = agg_agg_isNull_70_0;
/* 207 */     agg_bufValue_7 = agg_value_70;
/* 208 */   }
/* 209 */
/* 210 */   private void agg_doAggregateWithoutKey_0() throws java.io.IOException {
/* 211 */     // initialize aggregation buffer
/* 212 */     agg_bufIsNull_0 = true;
/* 213 */     agg_bufValue_0 = -1L;
/* 214 */     agg_bufIsNull_1 = true;
/* 215 */     agg_bufValue_1 = -1.0;
/* 216 */ agg_bufIsNull_2 = true; | |
/* 217 */ agg_bufValue_2 = -1.0; | |
/* 218 */ agg_bufIsNull_3 = true; | |
/* 219 */ agg_bufValue_3 = -1.0; | |
/* 220 */ | |
/* 221 */ while (!agg_initAgg_1) { | |
/* 222 */ agg_initAgg_1 = true; | |
/* 223 */ long agg_beforeAgg_0 = System.nanoTime(); | |
/* 224 */ agg_doAggregateWithoutKey_1(); | |
/* 225 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[1] /* aggTime */).add((System.nanoTime() - agg_beforeAgg_0) / 1000000); | |
/* 226 */ | |
/* 227 */ // output the result | |
/* 228 */ | |
/* 229 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[0] /* numOutputRows */).add(1); | |
/* 230 */ agg_doConsume_1(agg_bufValue_4, agg_bufIsNull_4, agg_bufValue_5, agg_bufIsNull_5, agg_bufValue_6, agg_bufIsNull_6, agg_bufValue_7, agg_bufIsNull_7); | |
/* 231 */ } | |
/* 232 */ | |
/* 233 */ } | |
/* 234 */ | |
/* 235 */ private void agg_doAggregate_sum_2(boolean agg_exprIsNull_1_0, org.apache.spark.unsafe.types.UTF8String agg_expr_1_0) throws java.io.IOException { | |
/* 236 */ // do aggregate for sum | |
/* 237 */ // evaluate aggregate function | |
/* 238 */ agg_agg_isNull_60_0 = true; | |
/* 239 */ double agg_value_60 = -1.0; | |
/* 240 */ do { | |
/* 241 */ boolean agg_isNull_61 = true; | |
/* 242 */ double agg_value_61 = -1.0; | |
/* 243 */ agg_agg_isNull_62_0 = true; | |
/* 244 */ double agg_value_62 = -1.0; | |
/* 245 */ do { | |
/* 246 */ if (!agg_bufIsNull_6) { | |
/* 247 */ agg_agg_isNull_62_0 = false; | |
/* 248 */ agg_value_62 = agg_bufValue_6; | |
/* 249 */ continue; | |
/* 250 */ } | |
/* 251 */ | |
/* 252 */ if (!false) { | |
/* 253 */ agg_agg_isNull_62_0 = false; | |
/* 254 */ agg_value_62 = 0.0D; | |
/* 255 */ continue; | |
/* 256 */ } | |
/* 257 */ | |
/* 258 */ } while (false); | |
/* 259 */ boolean agg_isNull_65 = true; | |
/* 260 */ double agg_value_65 = -1.0; | |
/* 261 */ boolean agg_isNull_66 = agg_exprIsNull_1_0; | |
/* 262 */ double agg_value_66 = -1.0; | |
/* 263 */ if (!agg_exprIsNull_1_0) { | |
/* 264 */ final String agg_doubleStr_1 = agg_expr_1_0.toString(); | |
/* 265 */ try { | |
/* 266 */ agg_value_66 = Double.valueOf(agg_doubleStr_1); | |
/* 267 */ } catch (java.lang.NumberFormatException e) { | |
/* 268 */ final Double d = (Double) Cast.processFloatingPointSpecialLiterals(agg_doubleStr_1, false); | |
/* 269 */ if (d == null) { | |
/* 270 */ agg_isNull_66 = true; | |
/* 271 */ } else { | |
/* 272 */ agg_value_66 = d.doubleValue(); | |
/* 273 */ } | |
/* 274 */ } | |
/* 275 */ } | |
/* 276 */ if (!agg_isNull_66) { | |
/* 277 */ agg_isNull_65 = false; // resultCode could change nullability. | |
/* 278 */ | |
/* 279 */ agg_value_65 = agg_value_66 * 1.0D; | |
/* 280 */ | |
/* 281 */ } | |
/* 282 */ if (!agg_isNull_65) { | |
/* 283 */ agg_isNull_61 = false; // resultCode could change nullability. | |
/* 284 */ | |
/* 285 */ agg_value_61 = agg_value_62 + agg_value_65; | |
/* 286 */ | |
/* 287 */ } | |
/* 288 */ if (!agg_isNull_61) { | |
/* 289 */ agg_agg_isNull_60_0 = false; | |
/* 290 */ agg_value_60 = agg_value_61; | |
/* 291 */ continue; | |
/* 292 */ } | |
/* 293 */ | |
/* 294 */ if (!agg_bufIsNull_6) { | |
/* 295 */ agg_agg_isNull_60_0 = false; | |
/* 296 */ agg_value_60 = agg_bufValue_6; | |
/* 297 */ continue; | |
/* 298 */ } | |
/* 299 */ | |
/* 300 */ } while (false); | |
/* 301 */ // update aggregation buffers | |
/* 302 */ agg_bufIsNull_6 = agg_agg_isNull_60_0; | |
/* 303 */ agg_bufValue_6 = agg_value_60; | |
/* 304 */ } | |
/* 305 */ | |
/* 306 */ private void agg_doConsume_1(long agg_expr_0_1, boolean agg_exprIsNull_0_1, double agg_expr_1_1, boolean agg_exprIsNull_1_1, double agg_expr_2_1, boolean agg_exprIsNull_2_1, double agg_expr_3_0, boolean agg_exprIsNull_3_0) throws java.io.IOException { | |
/* 307 */ // do aggregate | |
/* 308 */ // common sub-expressions | |
/* 309 */ | |
/* 310 */ // evaluate aggregate functions and update aggregation buffers | |
/* 311 */ agg_doAggregate_sum_4(agg_exprIsNull_0_1, agg_expr_0_1); | |
/* 312 */ agg_doAggregate_sum_5(agg_expr_1_1, agg_exprIsNull_1_1); | |
/* 313 */ agg_doAggregate_sum_6(agg_expr_2_1, agg_exprIsNull_2_1); | |
/* 314 */ agg_doAggregate_sum_7(agg_exprIsNull_3_0, agg_expr_3_0); | |
/* 315 */ | |
/* 316 */ } | |
/* 317 */ | |
/* 318 */ private void project_doConsume_0(InternalRow inputadapter_row_0, UTF8String project_expr_0_0, boolean project_exprIsNull_0_0) throws java.io.IOException { | |
/* 319 */ agg_doConsume_0(project_expr_0_0, project_exprIsNull_0_0, project_expr_0_0, project_exprIsNull_0_0, project_expr_0_0, project_exprIsNull_0_0); | |
/* 320 */ | |
/* 321 */ } | |
/* 322 */ | |
/* 323 */ private void agg_doAggregate_sum_5(double agg_expr_1_1, boolean agg_exprIsNull_1_1) throws java.io.IOException { | |
/* 324 */ // do aggregate for sum | |
/* 325 */ // evaluate aggregate function | |
/* 326 */ agg_agg_isNull_95_0 = true; | |
/* 327 */ double agg_value_95 = -1.0; | |
/* 328 */ do { | |
/* 329 */ boolean agg_isNull_96 = true; | |
/* 330 */ double agg_value_96 = -1.0; | |
/* 331 */ agg_agg_isNull_97_0 = true; | |
/* 332 */ double agg_value_97 = -1.0; | |
/* 333 */ do { | |
/* 334 */ if (!agg_bufIsNull_1) { | |
/* 335 */ agg_agg_isNull_97_0 = false; | |
/* 336 */ agg_value_97 = agg_bufValue_1; | |
/* 337 */ continue; | |
/* 338 */ } | |
/* 339 */ | |
/* 340 */ if (!false) { | |
/* 341 */ agg_agg_isNull_97_0 = false; | |
/* 342 */ agg_value_97 = 0.0D; | |
/* 343 */ continue; | |
/* 344 */ } | |
/* 345 */ | |
/* 346 */ } while (false); | |
/* 347 */ | |
/* 348 */ if (!agg_exprIsNull_1_1) { | |
/* 349 */ agg_isNull_96 = false; // resultCode could change nullability. | |
/* 350 */ | |
/* 351 */ agg_value_96 = agg_value_97 + agg_expr_1_1; | |
/* 352 */ | |
/* 353 */ } | |
/* 354 */ if (!agg_isNull_96) { | |
/* 355 */ agg_agg_isNull_95_0 = false; | |
/* 356 */ agg_value_95 = agg_value_96; | |
/* 357 */ continue; | |
/* 358 */ } | |
/* 359 */ | |
/* 360 */ if (!agg_bufIsNull_1) { | |
/* 361 */ agg_agg_isNull_95_0 = false; | |
/* 362 */ agg_value_95 = agg_bufValue_1; | |
/* 363 */ continue; | |
/* 364 */ } | |
/* 365 */ | |
/* 366 */ } while (false); | |
/* 367 */ // update aggregation buffers | |
/* 368 */ agg_bufIsNull_1 = agg_agg_isNull_95_0; | |
/* 369 */ agg_bufValue_1 = agg_value_95; | |
/* 370 */ } | |
/* 371 */ | |
/* 372 */ private void agg_doAggregate_sum_1(boolean agg_exprIsNull_0_0, org.apache.spark.unsafe.types.UTF8String agg_expr_0_0) throws java.io.IOException { | |
[error] ! certain inputs.aggregation - no group-by - aggregtion functions only | |
/* 373 */ // do aggregate for sum | |
/* 374 */ // evaluate aggregate function | |
/* 375 */ agg_agg_isNull_50_0 = true; | |
/* 376 */ double agg_value_50 = -1.0; | |
/* 377 */ do { | |
/* 378 */ boolean agg_isNull_51 = true; | |
/* 379 */ double agg_value_51 = -1.0; | |
/* 380 */ agg_agg_isNull_52_0 = true; | |
/* 381 */ double agg_value_52 = -1.0; | |
/* 382 */ do { | |
/* 383 */ if (!agg_bufIsNull_5) { | |
/* 384 */ agg_agg_isNull_52_0 = false; | |
/* 385 */ agg_value_52 = agg_bufValue_5; | |
/* 386 */ continue; | |
/* 387 */ } | |
/* 388 */ | |
/* 389 */ if (!false) { | |
/* 390 */ agg_agg_isNull_52_0 = false; | |
/* 391 */ agg_value_52 = 0.0D; | |
/* 392 */ continue; | |
/* 393 */ } | |
/* 394 */ | |
/* 395 */ } while (false); | |
/* 396 */ boolean agg_isNull_55 = true; | |
/* 397 */ double agg_value_55 = -1.0; | |
/* 398 */ boolean agg_isNull_56 = agg_exprIsNull_0_0; | |
/* 399 */ double agg_value_56 = -1.0; | |
/* 400 */ if (!agg_exprIsNull_0_0) { | |
/* 401 */ final String agg_doubleStr_0 = agg_expr_0_0.toString(); | |
/* 402 */ try { | |
/* 403 */ agg_value_56 = Double.valueOf(agg_doubleStr_0); | |
/* 404 */ } catch (java.lang.NumberFormatException e) { | |
/* 405 */ final Double d = (Double) Cast.processFloatingPointSpecialLiterals(agg_doubleStr_0, false); | |
/* 406 */ if (d == null) { | |
/* 407 */ agg_isNull_56 = true; | |
/* 408 */ } else { | |
/* 409 */ agg_value_56 = d.doubleValue(); | |
/* 410 */ } | |
/* 411 */ } | |
/* 412 */ } | |
/* 413 */ if (!agg_isNull_56) { | |
/* 414 */ agg_isNull_55 = false; // resultCode could change nullability. | |
/* 415 */ | |
/* 416 */ agg_value_55 = agg_value_56 * 1.0D; | |
/* 417 */ | |
/* 418 */ } | |
/* 419 */ if (!agg_isNull_55) { | |
/* 420 */ agg_isNull_51 = false; // resultCode could change nullability. | |
/* 421 */ | |
/* 422 */ agg_value_51 = agg_value_52 + agg_value_55; | |
/* 423 */ | |
/* 424 */ } | |
/* 425 */ if (!agg_isNull_51) { | |
/* 426 */ agg_agg_isNull_50_0 = false; | |
/* 427 */ agg_value_50 = agg_value_51; | |
/* 428 */ continue; | |
/* 429 */ } | |
/* 430 */ | |
/* 431 */ if (!agg_bufIsNull_5) { | |
/* 432 */ agg_agg_isNull_50_0 = false; | |
/* 433 */ agg_value_50 = agg_bufValue_5; | |
/* 434 */ continue; | |
/* 435 */ } | |
/* 436 */ | |
/* 437 */ } while (false); | |
/* 438 */ // update aggregation buffers | |
/* 439 */ agg_bufIsNull_5 = agg_agg_isNull_50_0; | |
/* 440 */ agg_bufValue_5 = agg_value_50; | |
/* 441 */ } | |
/* 442 */ | |
/* 443 */ private void agg_doConsume_0(UTF8String agg_expr_0_0, boolean agg_exprIsNull_0_0, UTF8String agg_expr_1_0, boolean agg_exprIsNull_1_0, UTF8String agg_expr_2_0, boolean agg_exprIsNull_2_0) throws java.io.IOException { | |
/* 444 */ // do aggregate | |
/* 445 */ // common sub-expressions | |
/* 446 */ | |
/* 447 */ // evaluate aggregate functions and update aggregation buffers | |
/* 448 */ agg_doAggregate_sum_0(); | |
/* 449 */ agg_doAggregate_sum_1(agg_exprIsNull_0_0, agg_expr_0_0); | |
/* 450 */ agg_doAggregate_sum_2(agg_exprIsNull_1_0, agg_expr_1_0); | |
/* 451 */ agg_doAggregate_sum_3(agg_exprIsNull_2_0, agg_expr_2_0); | |
/* 452 */ | |
/* 453 */ } | |
/* 454 */ | |
/* 455 */ private void agg_doAggregate_sum_4(boolean agg_exprIsNull_0_1, long agg_expr_0_1) throws java.io.IOException { | |
/* 456 */ // do aggregate for sum | |
/* 457 */ // evaluate aggregate function | |
/* 458 */ agg_agg_isNull_88_0 = true; | |
/* 459 */ long agg_value_88 = -1L; | |
/* 460 */ do { | |
/* 461 */ boolean agg_isNull_89 = true; | |
/* 462 */ long agg_value_89 = -1L; | |
/* 463 */ agg_agg_isNull_90_0 = true; | |
/* 464 */ long agg_value_90 = -1L; | |
/* 465 */ do { | |
/* 466 */ if (!agg_bufIsNull_0) { | |
/* 467 */ agg_agg_isNull_90_0 = false; | |
/* 468 */ agg_value_90 = agg_bufValue_0; | |
/* 469 */ continue; | |
/* 470 */ } | |
/* 471 */ | |
/* 472 */ if (!false) { | |
/* 473 */ agg_agg_isNull_90_0 = false; | |
/* 474 */ agg_value_90 = 0L; | |
/* 475 */ continue; | |
/* 476 */ } | |
/* 477 */ | |
/* 478 */ } while (false); | |
/* 479 */ | |
/* 480 */ if (!agg_exprIsNull_0_1) { | |
/* 481 */ agg_isNull_89 = false; // resultCode could change nullability. | |
/* 482 */ | |
/* 483 */ agg_value_89 = agg_value_90 + agg_expr_0_1; | |
/* 484 */ | |
/* 485 */ } | |
/* 486 */ if (!agg_isNull_89) { | |
/* 487 */ agg_agg_isNull_88_0 = false; | |
/* 488 */ agg_value_88 = agg_value_89; | |
/* 489 */ continue; | |
/* 490 */ } | |
/* 491 */ | |
/* 492 */ if (!agg_bufIsNull_0) { | |
/* 493 */ agg_agg_isNull_88_0 = false; | |
/* 494 */ agg_value_88 = agg_bufValue_0; | |
/* 495 */ continue; | |
/* 496 */ } | |
/* 497 */ | |
/* 498 */ } while (false); | |
/* 499 */ // update aggregation buffers | |
/* 500 */ agg_bufIsNull_0 = agg_agg_isNull_88_0; | |
/* 501 */ agg_bufValue_0 = agg_value_88; | |
/* 502 */ } | |
/* 503 */ | |
/* 504 */ private void agg_doAggregateWithoutKey_1() throws java.io.IOException { | |
/* 505 */ // initialize aggregation buffer | |
/* 506 */ agg_bufIsNull_4 = true; | |
/* 507 */ agg_bufValue_4 = -1L; | |
/* 508 */ agg_bufIsNull_5 = true; | |
/* 509 */ agg_bufValue_5 = -1.0; | |
/* 510 */ agg_bufIsNull_6 = true; | |
/* 511 */ agg_bufValue_6 = -1.0; | |
/* 512 */ agg_bufIsNull_7 = true; | |
/* 513 */ agg_bufValue_7 = -1.0; | |
/* 514 */ | |
/* 515 */ while ( inputadapter_input_0.hasNext()) { | |
/* 516 */ InternalRow inputadapter_row_0 = (InternalRow) inputadapter_input_0.next(); | |
/* 517 */ | |
/* 518 */ boolean inputadapter_isNull_0 = inputadapter_row_0.isNullAt(0); | |
/* 519 */ UTF8String inputadapter_value_0 = inputadapter_isNull_0 ? | |
/* 520 */ null : (inputadapter_row_0.getUTF8String(0)); | |
/* 521 */ | |
/* 522 */ project_doConsume_0(inputadapter_row_0, inputadapter_value_0, inputadapter_isNull_0); | |
/* 523 */ // shouldStop check is eliminated | |
/* 524 */ } | |
/* 525 */ | |
/* 526 */ } | |
/* 527 */ | |
/* 528 */ private void agg_doAggregate_sum_7(boolean agg_exprIsNull_3_0, double agg_expr_3_0) throws java.io.IOException { | |
/* 529 */ // do aggregate for sum | |
/* 530 */ // evaluate aggregate function | |
/* 531 */ agg_agg_isNull_109_0 = true; | |
/* 532 */ double agg_value_109 = -1.0; | |
/* 533 */ do { | |
/* 534 */ boolean agg_isNull_110 = true; | |
/* 535 */ double agg_value_110 = -1.0; | |
/* 536 */ agg_agg_isNull_111_0 = true; | |
/* 537 */ double agg_value_111 = -1.0; | |
/* 538 */ do { | |
/* 539 */ if (!agg_bufIsNull_3) { | |
/* 540 */ agg_agg_isNull_111_0 = false; | |
/* 541 */ agg_value_111 = agg_bufValue_3; | |
/* 542 */ continue; | |
/* 543 */ } | |
/* 544 */ | |
/* 545 */ if (!false) { | |
/* 546 */ agg_agg_isNull_111_0 = false; | |
/* 547 */ agg_value_111 = 0.0D; | |
/* 548 */ continue; | |
/* 549 */ } | |
/* 550 */ | |
/* 551 */ } while (false); | |
/* 552 */ | |
/* 553 */ if (!agg_exprIsNull_3_0) { | |
/* 554 */ agg_isNull_110 = false; // resultCode could change nullability. | |
/* 555 */ | |
/* 556 */ agg_value_110 = agg_value_111 + agg_expr_3_0; | |
/* 557 */ | |
/* 558 */ } | |
/* 559 */ if (!agg_isNull_110) { | |
/* 560 */ agg_agg_isNull_109_0 = false; | |
/* 561 */ agg_value_109 = agg_value_110; | |
/* 562 */ continue; | |
/* 563 */ } | |
/* 564 */ | |
/* 565 */ if (!agg_bufIsNull_3) { | |
/* 566 */ agg_agg_isNull_109_0 = false; | |
/* 567 */ agg_value_109 = agg_bufValue_3; | |
/* 568 */ continue; | |
/* 569 */ } | |
/* 570 */ | |
/* 571 */ } while (false); | |
/* 572 */ // update aggregation buffers | |
/* 573 */ agg_bufIsNull_3 = agg_agg_isNull_109_0; | |
/* 574 */ agg_bufValue_3 = agg_value_109; | |
/* 575 */ } | |
/* 576 */ | |
/* 577 */ protected void processNext() throws java.io.IOException { | |
/* 578 */ while (!agg_initAgg_0) { | |
/* 579 */ agg_initAgg_0 = true; | |
/* 580 */ long agg_beforeAgg_1 = System.nanoTime(); | |
/* 581 */ agg_doAggregateWithoutKey_0(); | |
[error] org.apache.spark.SparkException: Job aborted due to stage failure: Task 5 in stage 39.0 failed 1 times, most recent failure: Lost task 5.0 in stage 39.0 (TID 168, 192.168.0.7, executor driver): java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, X), StringType), true, false) AS X#1129 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, __CAVEATS_ROW_LB), StringType), true, false) AS __CAVEATS_ROW_LB#1130 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, __CAVEATS_ROW_BG), StringType), true, false) AS __CAVEATS_ROW_BG#1131 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, __CAVEATS_ROW_UB), StringType), true, false) AS __CAVEATS_ROW_UB#1132 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 4, __CAVEATS_X_LB), StringType), true, false) AS __CAVEATS_X_LB#1133 | |
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 5, __CAVEATS_X_UB), StringType), true, false) AS __CAVEATS_X_UB#1134 | |
[error] at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:344) | |
[error] at org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350) | |
[error] at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) | |
[error] at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) | |
[error] at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) | |
[error] at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726) | |
[error] at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321) | |
[error] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872) | |
[error] at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872) | |
[error] at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) | |
[error] at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) | |
[error] at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) | |
[error] at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) | |
[error] at org.apache.spark.scheduler.Task.run(Task.scala:127) | |
[error] at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441) | |
[error] at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377) | |
[error] at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444) | |
[error] at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) | |
[error] at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) | |
[error] at java.base/java.lang.Thread.run(Thread.java:830) | |
[error] Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 | |
[error] at org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174) | |
[error] at org.apache.spark.sql.Row.isNullAt(Row.scala:204) | |
[error] at org.apache.spark.sql.Row.isNullAt$(Row.scala:204) | |
[error] at org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166) | |
[error] at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_1$(Unknown Source) | |
[error] at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source) | |
[error] at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340) | |
[error] ... 19 more | |
[error] | |
[error] Driver stacktrace: (DAGScheduler.scala:1989) | |
/* 582 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[3] /* aggTime */).add((System.nanoTime() - agg_beforeAgg_1) / 1000000); | |
/* 583 */ | |
/* 584 */ // output the result | |
/* 585 */ boolean agg_isNull_9 = true; | |
/* 586 */ boolean agg_value_9 = false; | |
/* 587 */ | |
/* 588 */ if (!agg_bufIsNull_0) { | |
/* 589 */ agg_isNull_9 = false; // resultCode could change nullability. | |
/* 590 */ agg_value_9 = agg_bufValue_0 == 0L; | |
/* 591 */ | |
/* 592 */ } | |
/* 593 */ boolean agg_isNull_8 = false; | |
/* 594 */ double agg_value_8 = -1.0; | |
/* 595 */ if (!agg_isNull_9 && agg_value_9) { | |
/* 596 */ agg_isNull_8 = false; | |
/* 597 */ agg_value_8 = 0.0D; | |
/* 598 */ } else { | |
/* 599 */ boolean agg_isNull_15 = agg_bufIsNull_0; | |
/* 600 */ double agg_value_15 = -1.0; | |
/* 601 */ if (!agg_bufIsNull_0) { | |
/* 602 */ agg_value_15 = (double) agg_bufValue_0; | |
/* 603 */ } | |
/* 604 */ boolean agg_isNull_13 = false; | |
/* 605 */ double agg_value_13 = -1.0; | |
/* 606 */ if (agg_isNull_15 || agg_value_15 == 0) { | |
/* 607 */ agg_isNull_13 = true; | |
/* 608 */ } else { | |
/* 609 */ if (agg_bufIsNull_0) { | |
/* 610 */ agg_isNull_13 = true; | |
/* 611 */ } else { | |
/* 612 */ agg_value_13 = (double)(agg_bufValue_0 / agg_value_15); | |
/* 613 */ } | |
/* 614 */ } | |
/* 615 */ agg_isNull_8 = agg_isNull_13; | |
/* 616 */ agg_value_8 = agg_value_13; | |
/* 617 */ } | |
/* 618 */ boolean agg_isNull_21 = true; | |
/* 619 */ boolean agg_value_21 = false; | |
/* 620 */ | |
/* 621 */ if (!agg_bufIsNull_0) { | |
/* 622 */ agg_isNull_21 = false; // resultCode could change nullability. | |
/* 623 */ agg_value_21 = agg_bufValue_0 == 0L; | |
/* 624 */ | |
/* 625 */ } | |
/* 626 */ boolean agg_isNull_20 = false; | |
/* 627 */ double agg_value_20 = -1.0; | |
/* 628 */ if (!agg_isNull_21 && agg_value_21) { | |
/* 629 */ agg_isNull_20 = false; | |
/* 630 */ agg_value_20 = 0.0D; | |
/* 631 */ } else { | |
/* 632 */ boolean agg_isNull_27 = agg_bufIsNull_0; | |
/* 633 */ double agg_value_27 = -1.0; | |
/* 634 */ if (!agg_bufIsNull_0) { | |
/* 635 */ agg_value_27 = (double) agg_bufValue_0; | |
/* 636 */ } | |
/* 637 */ boolean agg_isNull_25 = false; | |
/* 638 */ double agg_value_25 = -1.0; | |
/* 639 */ if (agg_isNull_27 || agg_value_27 == 0) { | |
/* 640 */ agg_isNull_25 = true; | |
/* 641 */ } else { | |
/* 642 */ if (agg_bufIsNull_0) { | |
/* 643 */ agg_isNull_25 = true; | |
/* 644 */ } else { | |
/* 645 */ agg_value_25 = (double)(agg_bufValue_0 / agg_value_27); | |
/* 646 */ } | |
/* 647 */ } | |
/* 648 */ agg_isNull_20 = agg_isNull_25; | |
/* 649 */ agg_value_20 = agg_value_25; | |
/* 650 */ } | |
/* 651 */ boolean agg_isNull_30 = true; | |
/* 652 */ boolean agg_value_30 = false; | |
/* 653 */ | |
/* 654 */ if (!agg_bufIsNull_0) { | |
/* 655 */ agg_isNull_30 = false; // resultCode could change nullability. | |
/* 656 */ agg_value_30 = agg_bufValue_0 == 0L; | |
/* 657 */ | |
/* 658 */ } | |
/* 659 */ boolean agg_isNull_29 = false; | |
/* 660 */ double agg_value_29 = -1.0; | |
/* 661 */ if (!agg_isNull_30 && agg_value_30) { | |
/* 662 */ agg_isNull_29 = false; | |
/* 663 */ agg_value_29 = 0.0D; | |
/* 664 */ } else { | |
/* 665 */ boolean agg_isNull_36 = agg_bufIsNull_0; | |
/* 666 */ double agg_value_36 = -1.0; | |
/* 667 */ if (!agg_bufIsNull_0) { | |
/* 668 */ agg_value_36 = (double) agg_bufValue_0; | |
/* 669 */ } | |
/* 670 */ boolean agg_isNull_34 = false; | |
/* 671 */ double agg_value_34 = -1.0; | |
/* 672 */ if (agg_isNull_36 || agg_value_36 == 0) { | |
/* 673 */ agg_isNull_34 = true; | |
/* 674 */ } else { | |
/* 675 */ if (agg_bufIsNull_0) { | |
/* 676 */ agg_isNull_34 = true; | |
/* 677 */ } else { | |
/* 678 */ agg_value_34 = (double)(agg_bufValue_0 / agg_value_36); | |
/* 679 */ } | |
/* 680 */ } | |
/* 681 */ agg_isNull_29 = agg_isNull_34; | |
/* 682 */ agg_value_29 = agg_value_34; | |
/* 683 */ } | |
/* 684 */ | |
/* 685 */ ((org.apache.spark.sql.execution.metric.SQLMetric) references[2] /* numOutputRows */).add(1); | |
/* 686 */ project_mutableStateArray_0[4].reset(); | |
/* 687 */ | |
/* 688 */ project_mutableStateArray_0[4].zeroOutNullBytes(); | |
/* 689 */ | |
/* 690 */ if (agg_isNull_8) { | |
/* 691 */ project_mutableStateArray_0[4].setNullAt(0); | |
/* 692 */ } else { | |
/* 693 */ project_mutableStateArray_0[4].write(0, agg_value_8); | |
/* 694 */ } | |
/* 695 */ | |
/* 696 */ project_mutableStateArray_0[4].write(1, 1); | |
/* 697 */ | |
/* 698 */ project_mutableStateArray_0[4].write(2, 1); | |
/* 699 */ | |
/* 700 */ project_mutableStateArray_0[4].write(3, 1); | |
/* 701 */ | |
/* 702 */ if (agg_isNull_20) { | |
/* 703 */ project_mutableStateArray_0[4].setNullAt(4); | |
/* 704 */ } else { | |
/* 705 */ project_mutableStateArray_0[4].write(4, agg_value_20); | |
/* 706 */ } | |
/* 707 */ | |
/* 708 */ if (agg_isNull_29) { | |
/* 709 */ project_mutableStateArray_0[4].setNullAt(5); | |
/* 710 */ } else { | |
/* 711 */ project_mutableStateArray_0[4].write(5, agg_value_29); | |
/* 712 */ } | |
/* 713 */ append((project_mutableStateArray_0[4].getRow())); | |
/* 714 */ } | |
/* 715 */ } | |
/* 716 */ | |
/* 717 */ } | |
15:17:36.185 [Executor task launch worker for task 168] ERROR org.apache.spark.executor.Executor - Exception in task 5.0 in stage 39.0 (TID 168) | |
java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, X), StringType), true, false) AS X#1129 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, __CAVEATS_ROW_LB), StringType), true, false) AS __CAVEATS_ROW_LB#1130 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, __CAVEATS_ROW_BG), StringType), true, false) AS __CAVEATS_ROW_BG#1131 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, __CAVEATS_ROW_UB), StringType), true, false) AS __CAVEATS_ROW_UB#1132 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 4, __CAVEATS_X_LB), StringType), true, false) AS __CAVEATS_X_LB#1133 | |
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 5, __CAVEATS_X_UB), StringType), true, false) AS __CAVEATS_X_UB#1134 | |
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:344) | |
at org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350) | |
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459) | |
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source) | |
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43) | |
[error] org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:1989) | |
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726) | |
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321) | |
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872) | |
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872) | |
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52) | |
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349) | |
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90) | |
at org.apache.spark.scheduler.Task.run(Task.scala:127) | |
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441) | |
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
at org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174)
at org.apache.spark.sql.Row.isNullAt(Row.scala:204)
at org.apache.spark.sql.Row.isNullAt$(Row.scala:204)
at org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_1$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340)
... 19 common frames omitted
15:17:36.185 [Executor task launch worker for task 174] ERROR org.apache.spark.executor.Executor - Exception in task 11.0 in stage 39.0 (TID 174)
java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: Index 5 out of bounds for length 5
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, X), StringType), true, false) AS X#1129
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, __CAVEATS_ROW_LB), StringType), true, false) AS __CAVEATS_ROW_LB#1130
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, __CAVEATS_ROW_BG), StringType), true, false) AS __CAVEATS_ROW_BG#1131
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, __CAVEATS_ROW_UB), StringType), true, false) AS __CAVEATS_ROW_UB#1132
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 4, __CAVEATS_X_LB), StringType), true, false) AS __CAVEATS_X_LB#1133
if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 5, __CAVEATS_X_UB), StringType), true, false) AS __CAVEATS_X_UB#1134
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:344)
at org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
at org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
at org.apache.spark.scheduler.Task.run(Task.scala:127)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:830)
Caused by: java.lang.ArrayIndexOutOfBoundsException: Index 5 out of bounds for length 5
at org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174)
at org.apache.spark.sql.Row.isNullAt(Row.scala:204)
at org.apache.spark.sql.Row.isNullAt$(Row.scala:204)
at org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_5$(Unknown Source)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340)
... 19 common frames omitted
15:17:36.217 [task-result-getter-2] ERROR o.a.spark.scheduler.TaskSetManager - Task 5 in stage 39.0 failed 1 times; aborting job
[error] org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:1977)
[error] org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:1976)
[error] org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1976)
[error] org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:956)
[error] org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:956)
[error] org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:956)
[error] org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2206)
[error] org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2155)
[error] org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2144)
[error] org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
[error] org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:758)
[error] org.apache.spark.SparkContext.runJob(SparkContext.scala:2116)
[error] org.apache.spark.SparkContext.runJob(SparkContext.scala:2137)
[error] org.apache.spark.SparkContext.runJob(SparkContext.scala:2156)
[error] org.apache.spark.SparkContext.runJob(SparkContext.scala:2181)
[error] org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1004)
[error] org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
[error] org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
[error] org.apache.spark.rdd.RDD.withScope(RDD.scala:388)
[error] org.apache.spark.rdd.RDD.collect(RDD.scala:1003)
[error] org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:365)
[error] org.apache.spark.sql.Dataset.collectFromPlan(Dataset.scala:3482)
[error] org.apache.spark.sql.Dataset.$anonfun$collect$1(Dataset.scala:2812)
[error] org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3472)
[error] org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$4(SQLExecution.scala:100)
[error] org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:160)
[error] org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:87)
[error] org.apache.spark.sql.Dataset.withAction(Dataset.scala:3468)
[error] org.apache.spark.sql.Dataset.collect(Dataset.scala:2812)
[error] org.mimirdb.utility.Bag$.apply(Bag.scala:41)
[error] org.mimirdb.test.DataFrameMatchers.dfBagEquals(DataFrameMatchers.scala:14)
[error] org.mimirdb.test.DataFrameMatchers.dfBagEquals$(DataFrameMatchers.scala:12)
[error] org.mimirdb.caveats.LogicalPlanRangeSpec.dfBagEquals(LogicalPlanRangeSpec.scala:22)
[error] org.mimirdb.test.DataFrameMatchers.$anonfun$beBagEqualsTo$2(DataFrameMatchers.scala:48)
[error] org.mimirdb.caveats.LogicalPlanRangeSpec.$anonfun$annotBagEqualToDF$1(LogicalPlanRangeSpec.scala:81)
[error] org.mimirdb.caveats.LogicalPlanRangeSpec.annotBagEqualToDF(LogicalPlanRangeSpec.scala:81)
[error] org.mimirdb.caveats.LogicalPlanRangeSpec.$anonfun$new$8(LogicalPlanRangeSpec.scala:368)
[error] org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:344)
[error] org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[error] org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[error] org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
[error] org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
[error] org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[error] org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
[error] org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
[error] org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[error] org.apache.spark.scheduler.Task.run(Task.scala:127)
[error] org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
[error] org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[error] org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174)
[error] org.apache.spark.sql.Row.isNullAt(Row.scala:204)
[error] org.apache.spark.sql.Row.isNullAt$(Row.scala:204)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_1$(Unknown Source)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
[error] org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340)
[error] org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[error] org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[error] org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
[error] org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
[error] org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[error] org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
[error] org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
[error] org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[error] org.apache.spark.scheduler.Task.run(Task.scala:127)
[error] org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
[error] org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[error] org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
[error] CAUSED BY
[error] java.lang.RuntimeException: Error while encoding: java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 0, X), StringType), true, false) AS X#1129
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 1, __CAVEATS_ROW_LB), StringType), true, false) AS __CAVEATS_ROW_LB#1130
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 2, __CAVEATS_ROW_BG), StringType), true, false) AS __CAVEATS_ROW_BG#1131
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 3, __CAVEATS_ROW_UB), StringType), true, false) AS __CAVEATS_ROW_UB#1132
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 4, __CAVEATS_X_LB), StringType), true, false) AS __CAVEATS_X_LB#1133
[error] if (assertnotnull(input[0, org.apache.spark.sql.Row, true]).isNullAt) null else staticinvoke(class org.apache.spark.unsafe.types.UTF8String, StringType, fromString, validateexternaltype(getexternalrowfield(assertnotnull(input[0, org.apache.spark.sql.Row, true]), 5, __CAVEATS_X_UB), StringType), true, false) AS __CAVEATS_X_UB#1134 (ExpressionEncoder.scala:344)
[error] org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:344)
[error] org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[error] org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[error] org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
[error] org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
[error] org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[error] org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
[error] org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
[error] org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[error] org.apache.spark.scheduler.Task.run(Task.scala:127)
[error] org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
[error] org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[error] org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174)
[error] org.apache.spark.sql.Row.isNullAt(Row.scala:204)
[error] org.apache.spark.sql.Row.isNullAt$(Row.scala:204)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_1$(Unknown Source)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
[error] org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340)
[error] org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[error] org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[error] org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
[error] org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
[error] org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[error] org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
[error] org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
[error] org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[error] org.apache.spark.scheduler.Task.run(Task.scala:127)
[error] org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
[error] org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[error] org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
[error] CAUSED BY
[error] java.lang.ArrayIndexOutOfBoundsException: Index 1 out of bounds for length 1 (rows.scala:174)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.get(rows.scala:174)
[error] org.apache.spark.sql.Row.isNullAt(Row.scala:204)
[error] org.apache.spark.sql.Row.isNullAt$(Row.scala:204)
[error] org.apache.spark.sql.catalyst.expressions.GenericRow.isNullAt(rows.scala:166)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.If_1$(Unknown Source)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(Unknown Source)
[error] org.apache.spark.sql.catalyst.encoders.ExpressionEncoder.toRow(ExpressionEncoder.scala:340)
[error] org.apache.spark.sql.SparkSession.$anonfun$createDataFrame$1(SparkSession.scala:350)
[error] org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
[error] org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
[error] org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:726)
[error] org.apache.spark.sql.execution.SparkPlan.$anonfun$getByteArrayRdd$1(SparkPlan.scala:321)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:872)
[error] org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:872)
[error] org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
[error] org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:349)
[error] org.apache.spark.rdd.RDD.iterator(RDD.scala:313)
[error] org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
[error] org.apache.spark.scheduler.Task.run(Task.scala:127)
[error] org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:441)
[error] org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1377)
[error] org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:444)
[info] TIP inputs
[info] Caveated inputs
[info] Total for specification LogicalPlanRangeSpec
[info] Finished in 8 seconds, 31 ms
[info] 1 example, 0 failure, 1 error
[error] Error: Total 1, Failed 0, Errors 1, Passed 0
[error] Error during tests:
[error] org.mimirdb.caveats.LogicalPlanRangeSpec
[error] (Test / testOnly) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 17 s, completed Jun 11, 2020, 3:17:44 PM
sbt:mimir-caveats>
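Reading the traces above: the failure happens while `ExpressionEncoder.toRow` encodes a `GenericRow` for `SparkSession.createDataFrame` (via the test's `beBagEqualsTo` matcher), and the encoder expects six fields (`X` plus the five `__CAVEATS_*` columns) while the supplied row holds fewer values, so `Row.isNullAt` indexes past the end. A minimal sketch of that failure mode, using a hypothetical `SimpleRow` stand-in rather than Spark's own `Row`:

```scala
// Minimal sketch of the failure mode in the log: a row shorter than its schema.
// SimpleRow is a hypothetical stand-in for Spark's Row, not the real API.
case class SimpleRow(values: Seq[Any]) {
  def get(i: Int): Any = values(i) // throws IndexOutOfBoundsException when i >= values.length
  def isNullAt(i: Int): Boolean = get(i) == null
}

// The annotated schema the encoder expects: X plus five caveat columns.
val schema = Seq("X", "__CAVEATS_ROW_LB", "__CAVEATS_ROW_BG",
                 "__CAVEATS_ROW_UB", "__CAVEATS_X_LB", "__CAVEATS_X_UB")

// The expected-result row carries only one value: length 1, not 6.
val row = SimpleRow(Seq("1"))

// Encoding walks every schema field (as the generated SpecificUnsafeProjection
// does), so the second access already fails, mirroring the log's
// "Index 1 out of bounds for length 1".
try {
  schema.indices.foreach(i => row.isNullAt(i))
} catch {
  case e: IndexOutOfBoundsException =>
    println(s"Error while encoding: index out of bounds (${e.getMessage})")
}
```

Under this reading, the fix would be to make the expected DataFrame in the test carry a value for every caveat column the annotation rewrite adds, so the row arity matches the encoder's schema.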