@nsivabalan
Created July 16, 2020 00:22
16020 [main] WARN org.apache.spark.util.Utils - Service 'SparkUI' could not bind on port 4040. Attempting port 4041.
16020 [main] WARN org.apache.spark.util.Utils - Service 'SparkUI' could not bind on port 4041. Attempting port 4042.
16834 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
16841 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
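
The first four warnings are test-harness noise: ports 4040 and 4041 are held by earlier SparkContexts, so the UI binds one port higher, and SparkSession.builder().getOrCreate() returns the already-running session, silently ignoring any new .config() calls. A minimal sketch of how to avoid both in a local test (assuming nothing here depends on the UI; the app name is illustrative):

    import org.apache.spark.sql.SparkSession

    // Disabling the UI avoids the port scan from 4040 upward; building the
    // session once and reusing it avoids the "existing SparkSession" config
    // warnings, since getOrCreate() ignores configs when it reuses a session.
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("hudi-it")
      .config("spark.ui.enabled", "false")
      .getOrCreate()
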
18296 [dispatcher-event-loop-0] WARN org.apache.spark.scheduler.TaskSetManager - Stage 0 contains a task of very large size (196 KB). The maximum recommended task size is 100 KB.
20747 [dispatcher-event-loop-1] WARN org.apache.spark.scheduler.TaskSetManager - Stage 1 contains a task of very large size (196 KB). The maximum recommended task size is 100 KB.
21805 [dispatcher-event-loop-0] WARN org.apache.spark.scheduler.TaskSetManager - Stage 2 contains a task of very large size (196 KB). The maximum recommended task size is 100 KB.
22062 [dispatcher-event-loop-0] WARN org.apache.spark.scheduler.TaskSetManager - Stage 3 contains a task of very large size (196 KB). The maximum recommended task size is 100 KB.
22590 [dispatcher-event-loop-0] WARN org.apache.spark.scheduler.TaskSetManager - Stage 4 contains a task of very large size (196 KB). The maximum recommended task size is 100 KB.
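
The 196 KB task warnings on stages 0-4 usually mean a sizeable driver-side object (here, likely the generated test records) is captured in the task closure and serialized with every task; Spark warns above 100 KB. A hedged sketch of the standard mitigation, broadcasting the object instead (names below are illustrative, not from this test):

    // Illustrative large driver-side object that would otherwise ride
    // along inside every serialized task.
    val lookup: Map[String, Int] = (1 to 100000).map(i => s"key$i" -> i).toMap

    // A broadcast ships one copy per executor and keeps the serialized
    // task small, which silences the "very large size" warning.
    val lookupBc = spark.sparkContext.broadcast(lookup)
    val hits = spark.sparkContext
      .parallelize(Seq("key1", "key42"))
      .map(k => lookupBc.value.getOrElse(k, 0))
      .collect()
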
23812 [IPC Server handler 7 on 63849] WARN org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedRenameTo: failed to rename /user/sivabala/test_table2/default/.hoodie_partition_metadata_0 to /user/sivabala/test_table2/default/.hoodie_partition_metadata because destination exists
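
This NameNode warning is benign: each Hudi write task stages its partition metadata as .hoodie_partition_metadata_<taskId> and renames it into place, so when several tasks race, only the first rename wins and the rest log this failure. Roughly the pattern at work (a sketch of the idiom, not Hudi's exact code):

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    val fs = FileSystem.get(new Configuration())
    val staged = new Path("/user/sivabala/test_table2/default/.hoodie_partition_metadata_0")
    val target = new Path("/user/sivabala/test_table2/default/.hoodie_partition_metadata")

    // First writer wins. Losers see rename() return false (the WARN above)
    // because the destination already exists, and can simply move on.
    if (!fs.exists(target)) {
      fs.rename(staged, target)
    }
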
24908 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
24908 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
25956 [pool-18-thread-4] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
25957 [pool-18-thread-4] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
26010 [pool-18-thread-5] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
26011 [pool-18-thread-5] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
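
"HiveConf of name X does not exist" means the test's hive-site.xml still carries keys this Hive version no longer defines; hive.metastore.local, for instance, was retired long ago (the presence or absence of hive.metastore.uris now decides local vs. remote metastore). Harmless, but easy to silence by unsetting the stale keys (a sketch reusing the session from the earlier snippet; Configuration.unset is the standard Hadoop API):

    // Drop keys the current HiveConf no longer recognizes so it stops
    // warning on every metastore connection.
    spark.sparkContext.hadoopConfiguration.unset("hive.metastore.local")
    spark.sparkContext.hadoopConfiguration.unset("hive.internal.ss.authz.settings.applied.marker")
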
OK
26713 [HMSHandler #0] WARN hive.log - Updating partition stats fast for: hive_trips
26717 [HMSHandler #0] WARN hive.log - Updated size to 991555
OK
26918 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
27684 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
28141 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
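
At this point Hudi's Spark datasource could not yet serve snapshot queries on MERGE_ON_READ tables, so DefaultSource falls back (hence this repeated warning) and points at the Hive-registered table instead. A sketch of the suggested workaround, reusing the session from above; the _rt suffix for the realtime/snapshot view is an assumption based on how Hudi's Hive sync named MOR tables in this era:

    // Query the snapshot (realtime) view through the Hive-synced table
    // rather than through the datasource path.
    spark.sql("SELECT count(*) FROM testdb1.hive_trips_rt").show()
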
28568 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
30660 [IPC Server handler 7 on 63849] WARN org.apache.hadoop.hdfs.StateChange - DIR* FSDirectory.unprotectedRenameTo: failed to rename /user/sivabala/test_downstream_table2/default/.hoodie_partition_metadata_1 to /user/sivabala/test_downstream_table2/default/.hoodie_partition_metadata because destination exists
31540 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
31820 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
32113 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
32419 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
32455 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
32721 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
32947 [DataXceiver for client DFSClient_NONMAPREDUCE_1724307773_1 at /127.0.0.1:63942 [Sending block BP-1760674547-127.0.0.1-1594858734425:blk_1073741851_1027]] ERROR org.apache.hadoop.hdfs.server.datanode.DataNode - BlockSender.sendChunks() exception:
java.io.IOException: Protocol wrong type for socket
at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
at sun.nio.ch.IOUtil.write(IOUtil.java:65)
at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:577)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.doSendBlock(BlockSender.java:773)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:710)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:552)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:253)
at java.lang.Thread.run(Thread.java:748)
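
The BlockSender "Protocol wrong type for socket" IOException is a known nuisance when a MiniDFSCluster streams blocks over loopback (commonly on macOS) through the transferTo/sendfile fast path; the client normally retries and succeeds, which matches the run continuing below. A hedged workaround is to disable that fast path. dfs.datanode.transferTo.allowed is a standard HDFS datanode key, though whether it cures this exact case is an assumption:

    import org.apache.hadoop.conf.Configuration

    // Force plain buffered writes instead of the zero-copy sendfile path
    // in the datanode started by the test's MiniDFSCluster.
    val hdfsConf = new Configuration()
    hdfsConf.setBoolean("dfs.datanode.transferTo.allowed", false)
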
33013 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
33378 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
33409 [main] WARN org.apache.hudi.utilities.sources.HoodieIncrSource - Already caught up. Begin Checkpoint was :20200715201915
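
"Already caught up. Begin Checkpoint was :20200715201915" just says the incremental source found no commits on the upstream table after that instant, so this DeltaStreamer round is a no-op. The knobs that drive it, as a sketch (key names recalled from HoodieIncrSource of this vintage; verify against the release in use):

    val props = new java.util.Properties()
    // Upstream Hudi table to tail incrementally.
    props.setProperty("hoodie.deltastreamer.source.hoodieincr.path", "/user/sivabala/test_table2")
    // How many commits to pull per round.
    props.setProperty("hoodie.deltastreamer.source.hoodieincr.num_instants", "1")
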
33416 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
33684 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
33989 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
34284 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
35025 [dispatcher-event-loop-1] WARN org.apache.spark.scheduler.TaskSetManager - Stage 55 contains a task of very large size (384 KB). The maximum recommended task size is 100 KB.
35173 [dispatcher-event-loop-1] WARN org.apache.spark.scheduler.TaskSetManager - Stage 56 contains a task of very large size (384 KB). The maximum recommended task size is 100 KB.
35378 [dispatcher-event-loop-1] WARN org.apache.spark.scheduler.TaskSetManager - Stage 57 contains a task of very large size (384 KB). The maximum recommended task size is 100 KB.
35552 [dispatcher-event-loop-0] WARN org.apache.spark.scheduler.TaskSetManager - Stage 58 contains a task of very large size (384 KB). The maximum recommended task size is 100 KB.
40138 [qtp2010081478-719] WARN org.apache.hudi.common.table.view.IncrementalTimelineSyncFileSystemView - Incremental Sync of timeline is turned off or deemed unsafe. Will revert to full syncing
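
The timeline-server warning means Hudi's embedded FileSystemView rebuilds its view of the timeline from scratch on each sync instead of applying deltas; still correct, merely slower on long timelines. A sketch of the opt-in, with the key name recalled from Hudi's FileSystemViewStorageConfig (treat it as an assumption):

    // Passed along with the other writer options on df.write.format("hudi").
    val viewOpts = Map("hoodie.filesystem.view.incr.timeline.sync.enable" -> "true")
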
40190 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
40190 [main] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
40368 [pool-18-thread-6] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
40368 [pool-18-thread-6] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
40419 [pool-18-thread-7] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
40419 [pool-18-thread-7] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
40612 [pool-18-thread-6] WARN org.apache.hadoop.hive.metastore.MetaStoreDirectSql - Failed to execute [SELECT "DBS"."NAME", "TBLS"."TBL_NAME", "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND "TBLS"."TBL_NAME" = ?] with parameters [testdb1, hive_trips]
javax.jdo.JDODataStoreException: Error executing SQL query "SELECT "DBS"."NAME", "TBLS"."TBL_NAME", "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND "TBLS"."TBL_NAME" = ?".
at org.datanucleus.api.jdo.NucleusJDOHelper.getJDOExceptionForNucleusException(NucleusJDOHelper.java:543)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:391)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939)
at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8551)
at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8547)
at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2789)
at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8559)
at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8537)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy34.getPrimaryKeys(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_primary_keys(HiveMetaStore.java:6828)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy36.get_primary_keys(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_primary_keys.getResult(ThriftHiveMetastore.java:12907)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_primary_keys.getResult(ThriftHiveMetastore.java:12891)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
NestedThrowablesStackTrace:
java.sql.SQLSyntaxErrorException: Table/View 'KEY_CONSTRAINTS' does not exist.
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.Util.generateCsSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.wrapInSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.TransactionResourceImpl.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.ConnectionChild.handleException(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement42.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver42.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at com.jolbox.bonecp.ConnectionHandle.prepareStatement(ConnectionHandle.java:1193)
at org.datanucleus.store.rdbms.SQLController.getStatementForQuery(SQLController.java:345)
at org.datanucleus.store.rdbms.query.RDBMSQueryUtils.getPreparedStatementForQuery(RDBMSQueryUtils.java:211)
at org.datanucleus.store.rdbms.query.SQLQuery.performExecute(SQLQuery.java:633)
at org.datanucleus.store.query.Query.executeQuery(Query.java:1855)
at org.datanucleus.store.rdbms.query.SQLQuery.executeWithArray(SQLQuery.java:807)
at org.datanucleus.api.jdo.JDOQuery.executeInternal(JDOQuery.java:368)
at org.datanucleus.api.jdo.JDOQuery.executeWithArray(JDOQuery.java:267)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1750)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939)
at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8551)
at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8547)
at org.apache.hadoop.hive.metastore.ObjectStore$GetHelper.run(ObjectStore.java:2789)
at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeysInternal(ObjectStore.java:8559)
at org.apache.hadoop.hive.metastore.ObjectStore.getPrimaryKeys(ObjectStore.java:8537)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RawStoreProxy.invoke(RawStoreProxy.java:101)
at com.sun.proxy.$Proxy34.getPrimaryKeys(Unknown Source)
at org.apache.hadoop.hive.metastore.HiveMetaStore$HMSHandler.get_primary_keys(HiveMetaStore.java:6828)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invokeInternal(RetryingHMSHandler.java:148)
at org.apache.hadoop.hive.metastore.RetryingHMSHandler.invoke(RetryingHMSHandler.java:107)
at com.sun.proxy.$Proxy36.get_primary_keys(Unknown Source)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_primary_keys.getResult(ThriftHiveMetastore.java:12907)
at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Processor$get_primary_keys.getResult(ThriftHiveMetastore.java:12891)
at org.apache.thrift.ProcessFunction.process(ProcessFunction.java:39)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:110)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor$1.run(TUGIBasedProcessor.java:106)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.hive.metastore.TUGIBasedProcessor.process(TUGIBasedProcessor.java:118)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:286)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLException: Table/View 'KEY_CONSTRAINTS' does not exist.
at org.apache.derby.impl.jdbc.SQLExceptionFactory.getSQLException(Unknown Source)
at org.apache.derby.impl.jdbc.SQLExceptionFactory40.wrapArgsForTransportAcrossDRDA(Unknown Source)
... 56 more
Caused by: ERROR 42X05: Table/View 'KEY_CONSTRAINTS' does not exist.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.sql.compile.FromBaseTable.bindTableDescriptor(Unknown Source)
at org.apache.derby.impl.sql.compile.FromBaseTable.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.TableOperatorNode.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.FromList.bindTables(Unknown Source)
at org.apache.derby.impl.sql.compile.SelectNode.bindNonVTITables(Unknown Source)
at org.apache.derby.impl.sql.compile.DMLStatementNode.bindTables(Unknown Source)
at org.apache.derby.impl.sql.compile.DMLStatementNode.bind(Unknown Source)
at org.apache.derby.impl.sql.compile.CursorNode.bindStatement(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
... 50 more
40618 [pool-18-thread-6] WARN org.apache.hadoop.hive.metastore.ObjectStore - Falling back to ORM path due to direct SQL failure (this is not an error): See previous errors; Error executing SQL query "SELECT "DBS"."NAME", "TBLS"."TBL_NAME", "COLUMNS_V2"."COLUMN_NAME","KEY_CONSTRAINTS"."POSITION", "KEY_CONSTRAINTS"."CONSTRAINT_NAME", "KEY_CONSTRAINTS"."ENABLE_VALIDATE_RELY" FROM "TBLS" INNER JOIN "KEY_CONSTRAINTS" ON "TBLS"."TBL_ID" = "KEY_CONSTRAINTS"."PARENT_TBL_ID" INNER JOIN "DBS" ON "TBLS"."DB_ID" = "DBS"."DB_ID" INNER JOIN "COLUMNS_V2" ON "COLUMNS_V2"."CD_ID" = "KEY_CONSTRAINTS"."PARENT_CD_ID" AND "COLUMNS_V2"."INTEGER_IDX" = "KEY_CONSTRAINTS"."PARENT_INTEGER_IDX" WHERE "KEY_CONSTRAINTS"."CONSTRAINT_TYPE" = 0 AND "DBS"."NAME" = ? AND "TBLS"."TBL_NAME" = ?".
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.executeWithArray(MetaStoreDirectSql.java:1762)
at org.apache.hadoop.hive.metastore.MetaStoreDirectSql.getPrimaryKeys(MetaStoreDirectSql.java:1939)
at org.apache.hadoop.hive.metastore.ObjectStore$11.getSqlResult(ObjectStore.java:8551)
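
Root cause of the whole Derby stack above: the embedded metastore schema predates the KEY_CONSTRAINTS table (added with the Hive 2.1 metastore schema), so the direct-SQL getPrimaryKeys probe fails and Hive, as this WARN says, falls back to the ORM path; functionally benign for the test. For a throwaway test metastore, letting DataNucleus manage the schema sidesteps it (a sketch reusing the session from above; alternatively, upgrade the schema offline with Hive's schematool, e.g. schematool -dbType derby -upgradeSchema):

    // Let DataNucleus create missing metastore tables and skip the strict
    // schema version check; suitable only for disposable test metastores.
    spark.sparkContext.hadoopConfiguration.set("datanucleus.schema.autoCreateAll", "true")
    spark.sparkContext.hadoopConfiguration.set("hive.metastore.schema.verification", "false")
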
41427 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
41678 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
41946 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
42255 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
49714 [main] WARN org.apache.spark.sql.SparkSession$Builder - Using an existing SparkSession; some configuration may not take effect.
53928 [qtp7430853-787] WARN org.apache.hudi.common.table.view.IncrementalTimelineSyncFileSystemView - Incremental Sync of timeline is turned off or deemed unsafe. Will revert to full syncing
53938 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
54182 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
54446 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
54727 [main] WARN org.apache.hudi.DefaultSource - Snapshot view not supported yet via data source, for MERGE_ON_READ tables. Please query the Hive table registered using Spark SQL.
61925 [HiveServer2-Handler-Pool: Thread-860] WARN org.apache.hadoop.security.ShellBasedUnixGroupsMapping - got exception trying to get groups for user anonymous: id: anonymous: no such user
id: anonymous: no such user
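
This last pair is HiveServer2 resolving Unix groups for the JDBC user "anonymous" (no username supplied on connect); the shell's "id: anonymous: no such user" output is echoed straight into the log. Benign; connecting as a real OS user avoids it (a sketch using the standard Hive JDBC URL; host and port are assumptions):

    import java.sql.DriverManager

    // Supply an OS-visible user instead of the default "anonymous".
    val conn = DriverManager.getConnection(
      "jdbc:hive2://localhost:10000/default",
      System.getProperty("user.name"), "")
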
62012 [pool-18-thread-8] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
62013 [pool-18-thread-8] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist
62061 [pool-18-thread-9] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.metastore.local does not exist
62061 [pool-18-thread-9] WARN org.apache.hadoop.hive.conf.HiveConf - HiveConf of name hive.internal.ss.authz.settings.applied.marker does not exist