SPARK-9343: DROP IF EXISTS throws if a table is missing
import org.apache.spark.sql.hive.HiveContext
val ctx = sqlContext.asInstanceOf[HiveContext]
import ctx.implicits._

// Table test is not present
ctx.tableNames

// ERROR Hive: NoSuchObjectException(message:default.test table not found)
ctx.sql("drop table if exists test")
ubuntu@ip-10-88-50-154:~$ ~/spark-1.4.1-bin-hadoop2.6/bin/spark-shell --packages com.databricks:spark-csv_2.10:1.0.3 --driver-memory 52g --conf "spark.driver.extraJavaOptions=-XX:MaxPermSize=512m" --conf "spark.local.dir=/data/spark/tmp"
Ivy Default Cache set to: /home/ubuntu/.ivy2/cache
The jars for the packages stored in: /home/ubuntu/.ivy2/jars
:: loading settings :: url = jar:file:/home/ubuntu/spark-1.4.1-bin-hadoop2.6/lib/spark-assembly-1.4.1-hadoop2.6.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
com.databricks#spark-csv_2.10 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
    confs: [default]
    found com.databricks#spark-csv_2.10;1.0.3 in central
    found org.apache.commons#commons-csv;1.1 in central
:: resolution report :: resolve 211ms :: artifacts dl 6ms
    :: modules in use:
    com.databricks#spark-csv_2.10;1.0.3 from central in [default]
    org.apache.commons#commons-csv;1.1 from central in [default]
    ---------------------------------------------------------------------
    |                  |            modules            ||   artifacts   |
    |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
    ---------------------------------------------------------------------
    |      default     |   2   |   0   |   0   |   0   ||   2   |   0   |
    ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
    confs: [default]
    0 artifacts copied, 2 already retrieved (0kB/5ms)
log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
15/07/25 15:16:55 INFO SecurityManager: Changing view acls to: ubuntu
15/07/25 15:16:55 INFO SecurityManager: Changing modify acls to: ubuntu
15/07/25 15:16:55 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/07/25 15:16:55 INFO HttpServer: Starting HTTP Server
15/07/25 15:16:56 INFO Utils: Successfully started service 'HTTP class server' on port 50863.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.1
      /_/
Using Scala version 2.10.4 (OpenJDK 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
15/07/25 15:16:58 WARN Utils: Your hostname, ip-10-88-50-154 resolves to a loopback address: 127.0.0.1; using 10.88.50.154 instead (on interface eth0)
15/07/25 15:16:58 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
15/07/25 15:16:58 INFO SparkContext: Running Spark version 1.4.1
15/07/25 15:16:58 WARN SparkConf: In Spark 1.0 and later spark.local.dir will be overridden by the value set by the cluster manager (via SPARK_LOCAL_DIRS in mesos/standalone and LOCAL_DIRS in YARN).
15/07/25 15:16:59 INFO SecurityManager: Changing view acls to: ubuntu
15/07/25 15:16:59 INFO SecurityManager: Changing modify acls to: ubuntu
15/07/25 15:16:59 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ubuntu); users with modify permissions: Set(ubuntu)
15/07/25 15:16:59 INFO Slf4jLogger: Slf4jLogger started
15/07/25 15:16:59 INFO Remoting: Starting remoting
15/07/25 15:16:59 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@10.88.50.154:34398]
15/07/25 15:16:59 INFO Utils: Successfully started service 'sparkDriver' on port 34398.
15/07/25 15:16:59 INFO SparkEnv: Registering MapOutputTracker
15/07/25 15:16:59 INFO SparkEnv: Registering BlockManagerMaster
15/07/25 15:16:59 INFO DiskBlockManager: Created local directory at /data/spark/tmp/spark-ef8a57c4-2f9e-4c19-aeda-f022637df165/blockmgr-2527b2c6-2576-48cb-8cf8-aab6301cdcc9
15/07/25 15:16:59 INFO MemoryStore: MemoryStore started with capacity 26.9 GB
15/07/25 15:16:59 INFO HttpFileServer: HTTP File server directory is /data/spark/tmp/spark-ef8a57c4-2f9e-4c19-aeda-f022637df165/httpd-11a4291f-3e16-4d75-960f-b8c84339ad5e
15/07/25 15:16:59 INFO HttpServer: Starting HTTP Server
15/07/25 15:16:59 INFO Utils: Successfully started service 'HTTP file server' on port 51912.
15/07/25 15:16:59 INFO SparkEnv: Registering OutputCommitCoordinator
15/07/25 15:16:59 INFO Utils: Successfully started service 'SparkUI' on port 4040.
15/07/25 15:16:59 INFO SparkUI: Started SparkUI at http://10.88.50.154:4040
15/07/25 15:16:59 INFO SparkContext: Added JAR file:/home/ubuntu/.ivy2/jars/com.databricks_spark-csv_2.10-1.0.3.jar at http://10.88.50.154:51912/jars/com.databricks_spark-csv_2.10-1.0.3.jar with timestamp 1437837419751
15/07/25 15:16:59 INFO SparkContext: Added JAR file:/home/ubuntu/.ivy2/jars/org.apache.commons_commons-csv-1.1.jar at http://10.88.50.154:51912/jars/org.apache.commons_commons-csv-1.1.jar with timestamp 1437837419752
15/07/25 15:16:59 INFO Executor: Starting executor ID driver on host localhost
15/07/25 15:16:59 INFO Executor: Using REPL class URI: http://10.88.50.154:50863
15/07/25 15:16:59 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 34739.
15/07/25 15:16:59 INFO NettyBlockTransferService: Server created on 34739
15/07/25 15:16:59 INFO BlockManagerMaster: Trying to register BlockManager
15/07/25 15:16:59 INFO BlockManagerMasterEndpoint: Registering block manager localhost:34739 with 26.9 GB RAM, BlockManagerId(driver, localhost, 34739)
15/07/25 15:16:59 INFO BlockManagerMaster: Registered BlockManager
15/07/25 15:17:00 INFO SparkILoop: Created spark context..
Spark context available as sc.
15/07/25 15:17:00 INFO HiveContext: Initializing execution hive, version 0.13.1
15/07/25 15:17:00 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/07/25 15:17:00 INFO ObjectStore: ObjectStore, initialize called
15/07/25 15:17:00 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/07/25 15:17:00 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/07/25 15:17:00 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/07/25 15:17:01 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/07/25 15:17:02 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/07/25 15:17:02 INFO MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
15/07/25 15:17:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:02 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:04 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:04 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:04 INFO ObjectStore: Initialized ObjectStore
15/07/25 15:17:04 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.13.1aa
15/07/25 15:17:05 INFO HiveMetaStore: Added admin role in metastore
15/07/25 15:17:05 INFO HiveMetaStore: Added public role in metastore
15/07/25 15:17:05 INFO HiveMetaStore: No user is added in admin role, since config is empty
15/07/25 15:17:05 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/07/25 15:17:05 INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.
scala> import org.apache.spark.sql.hive.HiveContext
import org.apache.spark.sql.hive.HiveContext
scala>
scala> val ctx = sqlContext.asInstanceOf[HiveContext]
ctx: org.apache.spark.sql.hive.HiveContext = org.apache.spark.sql.hive.HiveContext@34a0ab9d
scala> import ctx.implicits._
import ctx.implicits._
scala>
scala> // Table test is not present
scala> ctx.tableNames
15/07/25 15:17:27 INFO HiveContext: Initializing HiveMetastoreConnection version 0.13.1 using Spark classes.
15/07/25 15:17:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/07/25 15:17:28 INFO HiveMetaStore: 0: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
15/07/25 15:17:28 INFO ObjectStore: ObjectStore, initialize called
15/07/25 15:17:28 INFO Persistence: Property datanucleus.cache.level2 unknown - will be ignored
15/07/25 15:17:28 INFO Persistence: Property hive.metastore.integral.jdo.pushdown unknown - will be ignored
15/07/25 15:17:28 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/07/25 15:17:28 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
15/07/25 15:17:29 INFO ObjectStore: Setting MetaStore object pin classes with hive.metastore.cache.pinobjtypes="Table,StorageDescriptor,SerDeInfo,Partition,Database,Type,FieldSchema,Order"
15/07/25 15:17:29 INFO MetaStoreDirectSql: MySQL check failed, assuming we are not on mysql: Lexical error at line 1, column 5. Encountered: "@" (64), after : "".
15/07/25 15:17:29 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:29 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:30 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MFieldSchema" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:30 INFO Datastore: The class "org.apache.hadoop.hive.metastore.model.MOrder" is tagged as "embedded-only" so does not have its own datastore table.
15/07/25 15:17:30 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery@0" since the connection used is closing
15/07/25 15:17:30 INFO ObjectStore: Initialized ObjectStore
15/07/25 15:17:30 INFO HiveMetaStore: Added admin role in metastore
15/07/25 15:17:30 INFO HiveMetaStore: Added public role in metastore
15/07/25 15:17:30 INFO HiveMetaStore: No user is added in admin role, since config is empty
15/07/25 15:17:31 INFO SessionState: No Tez session required at this point. hive.execution.engine=mr.
15/07/25 15:17:31 INFO HiveMetaStore: 0: get_tables: db=default pat=.*
15/07/25 15:17:31 INFO audit: ugi=ubuntu ip=unknown-ip-addr cmd=get_tables: db=default pat=.*
res0: Array[String] = Array(agg_origin_by_platform_15m, dimension_components, dimension_creatives, dimension_domains, dimension_keywords, view_clicks, view_clicks_samples)
scala>
scala> // ERROR Hive: NoSuchObjectException(message:default.test table not found)
scala> ctx.sql("drop table if exists test")
15/07/25 15:17:31 INFO ParseDriver: Parsing command: drop table if exists test
15/07/25 15:17:31 INFO ParseDriver: Parse Completed
15/07/25 15:17:31 INFO HiveMetaStore: 0: get_table : db=default tbl=test
15/07/25 15:17:31 INFO audit: ugi=ubuntu ip=unknown-ip-addr cmd=get_table : db=default tbl=test
15/07/25 15:17:31 INFO PerfLogger: <PERFLOG method=Driver.run from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:31 INFO PerfLogger: <PERFLOG method=TimeToSubmit from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:31 INFO Driver: Concurrency mode is disabled, not creating a lock manager
15/07/25 15:17:31 INFO PerfLogger: <PERFLOG method=compile from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:31 INFO PerfLogger: <PERFLOG method=parse from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:31 INFO ParseDriver: Parsing command: DROP TABLE IF EXISTS test
15/07/25 15:17:32 INFO ParseDriver: Parse Completed
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=parse start=1437837451932 end=1437837452185 duration=253 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: <PERFLOG method=semanticAnalyze from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO HiveMetaStore: 0: get_table : db=default tbl=test
15/07/25 15:17:32 INFO audit: ugi=ubuntu ip=unknown-ip-addr cmd=get_table : db=default tbl=test
15/07/25 15:17:32 INFO Driver: Semantic Analysis Completed
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=semanticAnalyze start=1437837452185 end=1437837452222 duration=37 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO Driver: Returning Hive schema: Schema(fieldSchemas:null, properties:null)
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=compile start=1437837451911 end=1437837452228 duration=317 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: <PERFLOG method=Driver.execute from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO Driver: Starting command: DROP TABLE IF EXISTS test
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=TimeToSubmit start=1437837451909 end=1437837452243 duration=334 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: <PERFLOG method=runTasks from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: <PERFLOG method=task.DDL.Stage-0 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO HiveMetaStore: 0: get_table : db=default tbl=test
15/07/25 15:17:32 INFO audit: ugi=ubuntu ip=unknown-ip-addr cmd=get_table : db=default tbl=test
15/07/25 15:17:32 ERROR Hive: NoSuchObjectException(message:default.test table not found)
    at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_table(HiveMetaStore.java:1560)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:105)
    at com.sun.proxy.$Proxy27.get_table(Unknown Source)
    at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getTable(HiveMetaStoreClient.java:997)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:89)
    at com.sun.proxy.$Proxy28.getTable(Unknown Source)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:976)
    at org.apache.hadoop.hive.ql.metadata.Hive.getTable(Hive.java:918)
    at org.apache.hadoop.hive.ql.exec.DDLTask.dropTableOrPartitions(DDLTask.java:3846)
    at org.apache.hadoop.hive.ql.exec.DDLTask.execute(DDLTask.java:306)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:153)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:85)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1503)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1270)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1088)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:911)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:901)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:345)
    at org.apache.spark.sql.hive.client.ClientWrapper$$anonfun$runHive$1.apply(ClientWrapper.scala:326)
    at org.apache.spark.sql.hive.client.ClientWrapper.withHiveState(ClientWrapper.scala:155)
    at org.apache.spark.sql.hive.client.ClientWrapper.runHive(ClientWrapper.scala:326)
    at org.apache.spark.sql.hive.client.ClientWrapper.runSqlHive(ClientWrapper.scala:316)
    at org.apache.spark.sql.hive.HiveContext.runSqlHive(HiveContext.scala:473)
    at org.apache.spark.sql.hive.execution.DropTable.run(commands.scala:71)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult$lzycompute(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.sideEffectResult(commands.scala:57)
    at org.apache.spark.sql.execution.ExecutedCommand.doExecute(commands.scala:68)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:88)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:147)
    at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:87)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd$lzycompute(SQLContext.scala:950)
    at org.apache.spark.sql.SQLContext$QueryExecution.toRdd(SQLContext.scala:950)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:144)
    at org.apache.spark.sql.DataFrame.<init>(DataFrame.scala:128)
    at org.apache.spark.sql.DataFrame$.apply(DataFrame.scala:51)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:755)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:26)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:31)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:33)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:35)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:37)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC$$iwC.<init>(<console>:39)
    at $line25.$read$$iwC$$iwC$$iwC$$iwC.<init>(<console>:41)
    at $line25.$read$$iwC$$iwC$$iwC.<init>(<console>:43)
    at $line25.$read$$iwC$$iwC.<init>(<console>:45)
    at $line25.$read$$iwC.<init>(<console>:47)
    at $line25.$read.<init>(<console>:49)
    at $line25.$read$.<init>(<console>:53)
    at $line25.$read$.<clinit>(<console>)
    at $line25.$eval$.<init>(<console>:7)
    at $line25.$eval$.<clinit>(<console>)
    at $line25.$eval.$print(<console>)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
    at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
    at org.apache.spark.repl.SparkIMain.loadAndRunReq$1(SparkIMain.scala:840)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
    at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
    at org.apache.spark.repl.SparkILoop.reallyInterpret$1(SparkILoop.scala:857)
    at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:902)
    at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:814)
    at org.apache.spark.repl.SparkILoop.processLine$1(SparkILoop.scala:657)
    at org.apache.spark.repl.SparkILoop.innerLoop$1(SparkILoop.scala:665)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$loop(SparkILoop.scala:670)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply$mcZ$sp(SparkILoop.scala:997)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$1.apply(SparkILoop.scala:945)
    at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
    at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:945)
    at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1059)
    at org.apache.spark.repl.Main$.main(Main.scala:31)
    at org.apache.spark.repl.Main.main(Main.scala)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:665)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:170)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:193)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:112)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
15/07/25 15:17:32 INFO HiveMetaStore: 0: get_table : db=default tbl=test
15/07/25 15:17:32 INFO audit: ugi=ubuntu ip=unknown-ip-addr cmd=get_table : db=default tbl=test
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=runTasks start=1437837452243 end=1437837452305 duration=62 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=Driver.execute start=1437837452228 end=1437837452305 duration=77 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO Driver: OK
15/07/25 15:17:32 INFO PerfLogger: <PERFLOG method=releaseLocks from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=releaseLocks start=1437837452309 end=1437837452309 duration=0 from=org.apache.hadoop.hive.ql.Driver>
15/07/25 15:17:32 INFO PerfLogger: </PERFLOG method=Driver.run start=1437837451909 end=1437837452309 duration=400 from=org.apache.hadoop.hive.ql.Driver>
res1: org.apache.spark.sql.DataFrame = []
scala>
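Note that in this run the statement still returns an empty DataFrame (res1) after the ERROR is logged, so the exception is swallowed inside the Hive DDL task and surfaces only as log noise. A quick way to confirm that in the same session (a sketch, assuming the ctx defined above):

import scala.util.{Failure, Success, Try}

// Wrap the DROP in Try to show whether an exception actually propagates to
// the caller, even though Hive logs NoSuchObjectException at ERROR level.
Try(ctx.sql("drop table if exists test")) match {
  case Success(df) => println(s"DROP completed, result schema: ${df.schema}")
  case Failure(e)  => println(s"DROP actually threw: $e")
}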
Hey, did you get a resolution for this? I am also facing the same error with Hive in Spark.