➜ dev spark-1.4.1-bin-hadoop2.6/bin/spark-sql --packages "com.databricks:spark-csv_2.10:1.0.3,com.lihaoyi:pprint_2.10:0.3.4" --driver-memory 4g --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=512m" --conf "spark.local.dir=/Users/sim/tmp" --conf spark.hadoop.fs.s3n.impl=org.apache.hadoop.fs.s3native.NativeS3FileSystem
Ivy Default Cache set to: /Users/sim/.ivy2/cache
The jars for the packages stored in: /Users/sim/.ivy2/jars
:: loading settings :: url = jar:file:/Users/sim/dev/spark-1.4.1-bin-hadoop2.6/lib/spark-assembly-1.4.1-hadoop2.6.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
com.lihaoyi#pprint_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
confs: [default]
found com.databricks#spark-csv_2.10;1.0.3 in central
found org.apache.commons#commons-csv;1.1 in central
found com.lihaoyi#pprint_2.10;0.3.4 in central
found com.lihaoyi#derive_2.10;0.3.4 in central
:: resolution report :: resolve 260ms :: artifacts dl 10ms
:: modules in use:
com.databricks#spark-csv_2.10;1.0.3 from central in [default]
com.lihaoyi#derive_2.10;0.3.4 from central in [default]
com.lihaoyi#pprint_2.10;0.3.4 from central in [default]
org.apache.commons#commons-csv;1.1 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 4 | 0 | 0 | 0 || 4 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
confs: [default]
0 artifacts copied, 4 already retrieved (0kB/5ms)
2015-08-08 23:11:44.101 java[54438:32375178] Unable to load realm info from SCDynamicStore
15/08/08 23:11:44 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/08 23:11:44 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/08/08 23:11:44 INFO metastore.ObjectStore: ObjectStore, initialize called
15/08/08 23:11:44 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/08/08 23:11:44 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/08/08 23:11:45 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/08/08 23:11:45 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
15/08/08 23:11:46 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:46 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:46 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:46 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:47 INFO metastore.ObjectStore: Initialized ObjectStore
15/08/08 23:11:47 WARN metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
15/08/08 23:11:47 INFO metastore.HiveMetaStore: Added admin role in metastore
15/08/08 23:11:47 INFO metastore.HiveMetaStore: Added public role in metastore
15/08/08 23:11:47 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/08/08 23:11:47 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/08/08 23:11:47 INFO spark.SparkContext: Running Spark version 1.4.1
15/08/08 23:11:47 WARN spark.SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
15/08/08 23:11:47 WARN spark.SparkConf:
SPARK_CLASSPATH was detected (set to ':/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/hadoop-aws-2.6.0.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/guava-11.0.2.jar').
This is deprecated in Spark 1.0+.
Please instead use:
- ./spark-submit with --driver-class-path to augment the driver classpath
- spark.executor.extraClassPath to augment the executor classpath
15/08/08 23:11:47 WARN spark.SparkConf: Setting 'spark.executor.extraClassPath' to ':/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/hadoop-aws-2.6.0.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/guava-11.0.2.jar' as a work-around.
15/08/08 23:11:47 WARN spark.SparkConf: Setting 'spark.driver.extraClassPath' to ':/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/hadoop-aws-2.6.0.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/aws-java-sdk-1.7.4.jar:/usr/local/Cellar/hadoop/2.6.0/libexec/share/hadoop/tools/lib/guava-11.0.2.jar' as a work-around.
15/08/08 23:11:47 INFO spark.SecurityManager: Changing view acls to: sim
15/08/08 23:11:47 INFO spark.SecurityManager: Changing modify acls to: sim
15/08/08 23:11:47 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(sim); users with modify permissions: Set(sim)
15/08/08 23:11:48 INFO slf4j.Slf4jLogger: Slf4jLogger started
15/08/08 23:11:48 INFO Remoting: Starting remoting
15/08/08 23:11:48 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:55604]
15/08/08 23:11:48 INFO util.Utils: Successfully started service 'sparkDriver' on port 55604.
15/08/08 23:11:48 INFO spark.SparkEnv: Registering MapOutputTracker
15/08/08 23:11:48 INFO spark.SparkEnv: Registering BlockManagerMaster
15/08/08 23:11:48 INFO storage.DiskBlockManager: Created local directory at /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/blockmgr-6faa8961-2535-4e23-a909-c27f9482579c
15/08/08 23:11:48 INFO storage.MemoryStore: MemoryStore started with capacity 2.1 GB
15/08/08 23:11:48 INFO spark.HttpFileServer: HTTP File server directory is /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/httpd-1039f874-ec57-4614-90cb-7761a748477b
15/08/08 23:11:48 INFO spark.HttpServer: Starting HTTP Server
15/08/08 23:11:48 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/08 23:11:48 INFO server.AbstractConnector: Started [email protected]:55605
15/08/08 23:11:48 INFO util.Utils: Successfully started service 'HTTP file server' on port 55605.
15/08/08 23:11:48 INFO spark.SparkEnv: Registering OutputCommitCoordinator
15/08/08 23:11:48 INFO server.Server: jetty-8.y.z-SNAPSHOT
15/08/08 23:11:48 INFO server.AbstractConnector: Started [email protected]:4040
15/08/08 23:11:48 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
15/08/08 23:11:48 INFO ui.SparkUI: Started SparkUI at http://192.168.1.4:4040
15/08/08 23:11:48 INFO spark.SparkContext: Added JAR file:/Users/sim/.ivy2/jars/com.databricks_spark-csv_2.10-1.0.3.jar at http://192.168.1.4:55605/jars/com.databricks_spark-csv_2.10-1.0.3.jar with timestamp 1439089908852
15/08/08 23:11:48 INFO spark.SparkContext: Added JAR file:/Users/sim/.ivy2/jars/com.lihaoyi_pprint_2.10-0.3.4.jar at http://192.168.1.4:55605/jars/com.lihaoyi_pprint_2.10-0.3.4.jar with timestamp 1439089908853
15/08/08 23:11:48 INFO spark.SparkContext: Added JAR file:/Users/sim/.ivy2/jars/org.apache.commons_commons-csv-1.1.jar at http://192.168.1.4:55605/jars/org.apache.commons_commons-csv-1.1.jar with timestamp 1439089908853
15/08/08 23:11:48 INFO spark.SparkContext: Added JAR file:/Users/sim/.ivy2/jars/com.lihaoyi_derive_2.10-0.3.4.jar at http://192.168.1.4:55605/jars/com.lihaoyi_derive_2.10-0.3.4.jar with timestamp 1439089908853
15/08/08 23:11:48 INFO executor.Executor: Starting executor ID driver on host localhost
15/08/08 23:11:48 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 55606.
15/08/08 23:11:48 INFO netty.NettyBlockTransferService: Server created on 55606
15/08/08 23:11:48 INFO storage.BlockManagerMaster: Trying to register BlockManager
15/08/08 23:11:48 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:55606 with 2.1 GB RAM, BlockManagerId(driver, localhost, 55606)
15/08/08 23:11:49 INFO storage.BlockManagerMaster: Registered BlockManager
15/08/08 23:11:49 INFO hive.HiveContext: Initializing execution hive, version 0.13.1
15/08/08 23:11:49 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
15/08/08 23:11:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/08/08 23:11:50 INFO metastore.HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/08/08 23:11:50 INFO metastore.ObjectStore: ObjectStore, initialize called
15/08/08 23:11:50 INFO DataNucleus.Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/08/08 23:11:50 INFO DataNucleus.Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/08/08 23:11:50 INFO metastore.ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/08/08 23:11:50 INFO metastore.MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
15/08/08 23:11:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:51 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/08/08 23:11:51 INFO DataNucleus.Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
15/08/08 23:11:51 INFO metastore.ObjectStore: Initialized ObjectStore
15/08/08 23:11:51 INFO metastore.HiveMetaStore: Added admin role in metastore
15/08/08 23:11:51 INFO metastore.HiveMetaStore: Added public role in metastore
15/08/08 23:11:52 INFO metastore.HiveMetaStore: No user is added in admin role, since config is empty
15/08/08 23:11:52 INFO session.SessionState: No Tez session required at this point. hive.execution.engine=mr.
SET spark.sql.hive.version=0.13.1
SET spark.sql.hive.version=0.13.1
15/08/08 23:11:52 INFO metastore.HiveMetaStore: 0: get_all_databases
15/08/08 23:11:52 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_all_databases
15/08/08 23:11:52 INFO metastore.HiveMetaStore: 0: get_functions: db=default pat=*
15/08/08 23:11:52 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_functions: db=default pat=*
15/08/08 23:11:52 INFO DataNucleus.Datastore: The class "org.apache.hadoop.hive.metastore.model.MResourceUri" is tagged as "embedded-only" so does not have its own datastore table.
spark-sql> describe dimension_components;
15/08/08 23:11:58 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dimension_components
15/08/08 23:11:58 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_table : db=default tbl=dimension_components
15/08/08 23:11:59 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dimension_components
15/08/08 23:11:59 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_table : db=default tbl=dimension_components
15/08/08 23:11:59 INFO storage.MemoryStore: ensureFreeSpace(235216) called with curMem=0, maxMem=2223023063
15/08/08 23:11:59 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 229.7 KB, free 2.1 GB)
15/08/08 23:11:59 INFO storage.MemoryStore: ensureFreeSpace(20200) called with curMem=235216, maxMem=2223023063
15/08/08 23:11:59 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 19.7 KB, free 2.1 GB)
15/08/08 23:11:59 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on localhost:55606 (size: 19.7 KB, free: 2.1 GB)
15/08/08 23:11:59 INFO spark.SparkContext: Created broadcast 0 from processCmd at CliDriver.java:423
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
15/08/08 23:11:59 INFO spark.SparkContext: Starting job: processCmd at CliDriver.java:423
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Got job 0 (processCmd at CliDriver.java:423) with 1 output partitions (allowLocal=false)
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Final stage: ResultStage 0(processCmd at CliDriver.java:423)
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Parents of final stage: List()
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Missing parents: List()
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[3] at processCmd at CliDriver.java:423), which has no missing parents
15/08/08 23:11:59 INFO storage.MemoryStore: ensureFreeSpace(2976) called with curMem=255416, maxMem=2223023063
15/08/08 23:11:59 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.9 KB, free 2.1 GB)
15/08/08 23:11:59 INFO storage.MemoryStore: ensureFreeSpace(1748) called with curMem=258392, maxMem=2223023063
15/08/08 23:11:59 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1748.0 B, free 2.1 GB)
15/08/08 23:11:59 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on localhost:55606 (size: 1748.0 B, free: 2.1 GB)
15/08/08 23:11:59 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:874
15/08/08 23:11:59 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[3] at processCmd at CliDriver.java:423)
15/08/08 23:11:59 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
15/08/08 23:11:59 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, localhost, PROCESS_LOCAL, 3985 bytes)
15/08/08 23:11:59 INFO executor.Executor: Running task 0.0 in stage 0.0 (TID 0)
15/08/08 23:11:59 INFO executor.Executor: Fetching http://192.168.1.4:55605/jars/com.databricks_spark-csv_2.10-1.0.3.jar with timestamp 1439089908852
15/08/08 23:11:59 INFO util.Utils: Fetching http://192.168.1.4:55605/jars/com.databricks_spark-csv_2.10-1.0.3.jar to /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/fetchFileTemp703165374757010544.tmp
15/08/08 23:11:59 INFO executor.Executor: Adding file:/Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/com.databricks_spark-csv_2.10-1.0.3.jar to class loader
15/08/08 23:11:59 INFO executor.Executor: Fetching http://192.168.1.4:55605/jars/com.lihaoyi_derive_2.10-0.3.4.jar with timestamp 1439089908853
15/08/08 23:11:59 INFO util.Utils: Fetching http://192.168.1.4:55605/jars/com.lihaoyi_derive_2.10-0.3.4.jar to /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/fetchFileTemp8518177886899742932.tmp
15/08/08 23:11:59 INFO executor.Executor: Adding file:/Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/com.lihaoyi_derive_2.10-0.3.4.jar to class loader
15/08/08 23:11:59 INFO executor.Executor: Fetching http://192.168.1.4:55605/jars/org.apache.commons_commons-csv-1.1.jar with timestamp 1439089908853
15/08/08 23:11:59 INFO util.Utils: Fetching http://192.168.1.4:55605/jars/org.apache.commons_commons-csv-1.1.jar to /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/fetchFileTemp7350593236314022543.tmp
15/08/08 23:11:59 INFO executor.Executor: Adding file:/Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/org.apache.commons_commons-csv-1.1.jar to class loader
15/08/08 23:11:59 INFO executor.Executor: Fetching http://192.168.1.4:55605/jars/com.lihaoyi_pprint_2.10-0.3.4.jar with timestamp 1439089908853
15/08/08 23:11:59 INFO util.Utils: Fetching http://192.168.1.4:55605/jars/com.lihaoyi_pprint_2.10-0.3.4.jar to /Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/fetchFileTemp7196274430753674096.tmp
15/08/08 23:11:59 INFO executor.Executor: Adding file:/Users/sim/tmp/spark-e693f4d5-e906-48a2-964e-ce1a0fb02174/userFiles-358ccbac-1d37-4695-9801-1e2e61e3db3f/com.lihaoyi_pprint_2.10-0.3.4.jar to class loader
15/08/08 23:11:59 INFO executor.Executor: Finished task 0.0 in stage 0.0 (TID 0). 2893 bytes result sent to driver
15/08/08 23:12:00 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 222 ms on localhost (1/1)
15/08/08 23:12:00 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/08/08 23:12:00 INFO scheduler.DAGScheduler: ResultStage 0 (processCmd at CliDriver.java:423) finished in 0.230 s
15/08/08 23:12:00 INFO scheduler.DAGScheduler: Job 0 finished: processCmd at CliDriver.java:423, took 0.273903 s
comp_config struct<adText:string,adTextLeft:string,background:string,brand:string,button_color:string,cta_side:string,cta_type:string,depth:string,fixed_under:string,light:string,mid_text:string,oneline:string,overhang:string,shine:string,style:string,style_secondary:string,style_small:string,type:string>
comp_criteria string
comp_data_model string
comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint>
comp_disabled boolean
comp_id bigint
comp_path string
comp_placementData struct<mod:string>
comp_slot_types array<string>
Time taken: 1.414 seconds, Fetched 9 row(s)
15/08/08 23:12:00 INFO CliDriver: Time taken: 1.414 seconds, Fetched 9 row(s)
spark-sql> 15/08/08 23:12:00 INFO scheduler.StatsReportListener: Finished stage: org.apache.spark.scheduler.StageInfo@26670e6e
15/08/08 23:12:00 INFO scheduler.StatsReportListener: task runtime:(count: 1, mean: 222.000000, stdev: 0.000000, max: 222.000000, min: 222.000000)
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 222.0 ms 222.0 ms 222.0 ms 222.0 ms 222.0 ms 222.0 ms 222.0 ms 222.0 ms 222.0 ms
15/08/08 23:12:00 INFO scheduler.StatsReportListener: task result size:(count: 1, mean: 2893.000000, stdev: 0.000000, max: 2893.000000, min: 2893.000000)
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 2.8 KB 2.8 KB 2.8 KB 2.8 KB 2.8 KB 2.8 KB 2.8 KB 2.8 KB 2.8 KB
15/08/08 23:12:00 INFO scheduler.StatsReportListener: executor (non-fetch) time pct: (count: 1, mean: 1.801802, stdev: 0.000000, max: 1.801802, min: 1.801802)
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 2 % 2 % 2 % 2 % 2 % 2 % 2 % 2 % 2 %
15/08/08 23:12:00 INFO scheduler.StatsReportListener: other time pct: (count: 1, mean: 98.198198, stdev: 0.000000, max: 98.198198, min: 98.198198)
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 0% 5% 10% 25% 50% 75% 90% 95% 100%
15/08/08 23:12:00 INFO scheduler.StatsReportListener: 98 % 98 % 98 % 98 % 98 % 98 % 98 % 98 % 98 %
> alter table dimension_components change comp_dimensions comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>;
15/08/08 23:13:06 INFO parse.ParseDriver: Parsing command: alter table dimension_components change comp_dimensions comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>
15/08/08 23:13:06 INFO parse.ParseDriver: Parse Completed
15/08/08 23:13:06 INFO log.PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:06 INFO log.PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:06 INFO ql.Driver: Concurrency mode is disabled, not creating a lock manager
15/08/08 23:13:06 INFO log.PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:06 INFO log.PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:06 INFO parse.ParseDriver: Parsing command: alter table dimension_components change comp_dimensions comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>
15/08/08 23:13:06 INFO parse.ParseDriver: Parse Completed
15/08/08 23:13:06 INFO log.PerfLogger: </PERFLOG method=parse start=1439089986557 end=1439089986907 duration=350 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:06 INFO log.PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dimension_components
15/08/08 23:13:07 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_table : db=default tbl=dimension_components
15/08/08 23:13:07 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dimension_components
15/08/08 23:13:07 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_table : db=default tbl=dimension_components
15/08/08 23:13:07 INFO ql.Driver: Semantic Analysis Completed
15/08/08 23:13:07 INFO log.PerfLogger: </PERFLOG method=semanticAnalyze start=1439089986907 end=1439089987060 duration=153 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO ql.Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
15/08/08 23:13:07 INFO log.PerfLogger: </PERFLOG method=compile start=1439089986532 end=1439089987066 duration=534 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO log.PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO ql.Driver: Starting command: alter table dimension_components change comp_dimensions comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>
15/08/08 23:13:07 INFO log.PerfLogger: </PERFLOG method=TimeToSubmit start=1439089986530 end=1439089987069 duration=539 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO log.PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO log.PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO metastore.HiveMetaStore: 0: get_table : db=default tbl=dimension_components
15/08/08 23:13:07 INFO HiveMetaStore.audit: ugi=sim ip=unknown-ip-addr cmd=get_table : db=default tbl=dimension_components
15/08/08 23:13:07 ERROR exec.DDLTask: org.apache.hadoop.hive.ql.metadata.HiveException: Invalid column reference comp_dimensions
at org.apache.hadoop.hive.ql.exec.DDLTask.alterTable(DDLTask.java:3584)
at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:312)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:345)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:155)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:316)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:473)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid column reference comp_dimensions
15/08/08 23:13:07 ERROR ql.Driver: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid column reference comp_dimensions
15/08/08 23:13:07 INFO log.PerfLogger: </PERFLOG method=Driver.execute start=1439089987066 end=1439089987095 duration=29 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO log.PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 INFO log.PerfLogger: </PERFLOG method=releaseLocks start=1439089987095 end=1439089987095 duration=0 from=org.apache.hadoop.hive.ql.Driver>
15/08/08 23:13:07 ERROR client.ClientWrapper:
======================
HIVE FAILURE OUTPUT
======================
======================
END HIVE FAILURE OUTPUT
======================
15/08/08 23:13:07 ERROR thriftserver.SparkSQLDriver: Failed in [alter table dimension_components change comp_dimensions comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>]
org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid column reference comp_dimensions
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:349)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:155)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:316)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:473)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid column reference comp_dimensions
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:349)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:155)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:316)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:473)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/08/08 23:13:07 ERROR CliDriver: org.apache.spark.sql.execution.QueryExecutionException: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Invalid column reference comp_dimensions
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:349)
at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:155)
at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:326)
at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:316)
at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:473)
at org.apache.spark.sql.hive.execution.HiveNativeCommand.run(HiveNativeCommand.scala:33)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
at org.apache.spark.sql.hive.thriftserver.AbstractSparkSQLDriver.run(AbstractSparkSQLDriver.scala:57)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:283)
at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:423)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:218)
at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
spark-sql>
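
A note on the failure above: Hive's DDLTask rejects the ALTER TABLE with "Invalid column reference comp_dimensions" even though DESCRIBE lists that column with a struct type. For comparison, here is a minimal sketch that isolates the same CHANGE COLUMN statement shape against a throwaway native table; the table name test_components is hypothetical, and whether the statement succeeds depends on how the table is registered in the metastore, so this makes no claim about why dimension_components itself fails.

-- Hypothetical scratch table, not part of the session above.
CREATE TABLE test_components (
  comp_dimensions struct<data:string,integrations:array<string>,template:string,variation:bigint>
);

-- Same statement shape as the failing command: keep the column name and
-- re-declare the struct type with one additional field.
ALTER TABLE test_components CHANGE comp_dimensions comp_dimensions
  struct<data:string,integrations:array<string>,template:string,variation:bigint,z:string>;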