@nsivabalan
Created November 11, 2021 00:30
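
The session below writes a dataframe `df` to a non-partitioned Hudi table at `basePath` using bulk_insert. Neither `df` nor `basePath` is defined anywhere in this gist, so the prelude that follows is a minimal assumed reconstruction for anyone trying to reproduce the run: the record-key column and table name are taken from the write options, while the path, the synthetic data, and the bundle version are illustrative placeholders only.

// Assumed prelude (not part of the original gist): defines the names that the
// write below references. Requires spark-shell launched with a Hudi bundle on
// the classpath, e.g. --packages org.apache.hudi:hudi-spark-bundle_2.11:<version>.
import org.apache.spark.sql.SaveMode.Overwrite
import org.apache.spark.sql.functions._

// Hypothetical target location; any HDFS/S3 path works.
val basePath = "hdfs:///tmp/hudi_binsert_base"

// Hypothetical input data carrying the record-key column used by the write below.
val df = spark.range(0, 1000000).toDF("id")
  .withColumn("col_str_0005", concat(lit("key_"), col("id").cast("string")))
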
scala> df.write.format("hudi").
| option("hoodie.datasource.write.recordkey.field","col_str_0005").
| option("hoodie.datasource.write.keygenerator.class","org.apache.hudi.keygen.NonpartitionedKeyGenerator").
| option("hoodie.datasource.write.operation","bulk_insert").
| option("hoodie.table.name","hudi_binsert_base").
| mode(Overwrite).
| save(basePath)
21/11/10 21:59:31 WARN util.package: Truncated the string representation of a plan since it was too large. This behavior can be adjusted by setting 'spark.sql.debug.maxToStringFields'.
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 6262720062168238709 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 12 at RPC address 172.31.35.206:59290, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 6262720062168238709 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 12 on ip-172-31-35-206.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 75.0 in stage 17.0 (TID 6545, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 15.0 in stage 17.0 (TID 6485, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 105.0 in stage 17.0 (TID 6575, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 60.0 in stage 17.0 (TID 6530, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 17.0 (TID 6470, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 90.0 in stage 17.0 (TID 6560, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 45.0 in stage 17.0 (TID 6515, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 30.0 in stage 17.0 (TID 6500, ip-172-31-35-206.us-east-2.compute.internal, executor 12): ExecutorLostFailure (executor 12 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 5855181747965535972 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 15 at RPC address 172.31.35.65:37692, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 5855181747965535972 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 15 on ip-172-31-35-65.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 98.0 in stage 17.0 (TID 6568, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 38.0 in stage 17.0 (TID 6508, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 83.0 in stage 17.0 (TID 6553, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 113.0 in stage 17.0 (TID 6583, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 68.0 in stage 17.0 (TID 6538, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 23.0 in stage 17.0 (TID 6493, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 53.0 in stage 17.0 (TID 6523, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 8.0 in stage 17.0 (TID 6478, ip-172-31-35-65.us-east-2.compute.internal, executor 15): ExecutorLostFailure (executor 15 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 8935110370652232434 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 2 at RPC address 172.31.35.73:50964, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 8935110370652232434 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 2 on ip-172-31-35-73.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 100.0 in stage 17.0 (TID 6570, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 55.0 in stage 17.0 (TID 6525, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 10.0 in stage 17.0 (TID 6480, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 85.0 in stage 17.0 (TID 6555, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 40.0 in stage 17.0 (TID 6510, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 25.0 in stage 17.0 (TID 6495, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 70.0 in stage 17.0 (TID 6540, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 115.0 in stage 17.0 (TID 6585, ip-172-31-35-73.us-east-2.compute.internal, executor 2): ExecutorLostFailure (executor 2 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 4691165587562492522 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 7 at RPC address 172.31.41.90:58256, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 4691165587562492522 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 7 on ip-172-31-41-90.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 65.0 in stage 17.0 (TID 6535, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 110.0 in stage 17.0 (TID 6580, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 50.0 in stage 17.0 (TID 6520, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 5.0 in stage 17.0 (TID 6475, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 95.0 in stage 17.0 (TID 6565, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 80.0 in stage 17.0 (TID 6550, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 35.0 in stage 17.0 (TID 6505, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 20.0 in stage 17.0 (TID 6490, ip-172-31-41-90.us-east-2.compute.internal, executor 7): ExecutorLostFailure (executor 7 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 7852910140474649376 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 8 at RPC address 172.31.41.90:58258, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 7852910140474649376 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 8 on ip-172-31-41-90.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 46.0 in stage 17.0 (TID 6516, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 1.0 in stage 17.0 (TID 6471, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 91.0 in stage 17.0 (TID 6561, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 76.0 in stage 17.0 (TID 6546, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 31.0 in stage 17.0 (TID 6501, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 16.0 in stage 17.0 (TID 6486, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 106.0 in stage 17.0 (TID 6576, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 61.0 in stage 17.0 (TID 6531, ip-172-31-41-90.us-east-2.compute.internal, executor 8): ExecutorLostFailure (executor 8 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 7392262507002508603 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 6 at RPC address 172.31.39.205:37714, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 7392262507002508603 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 6 on ip-172-31-39-205.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 89.0 in stage 17.0 (TID 6559, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 74.0 in stage 17.0 (TID 6544, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 29.0 in stage 17.0 (TID 6499, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 119.0 in stage 17.0 (TID 6589, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 104.0 in stage 17.0 (TID 6574, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 59.0 in stage 17.0 (TID 6529, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 14.0 in stage 17.0 (TID 6484, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 44.0 in stage 17.0 (TID 6514, ip-172-31-39-205.us-east-2.compute.internal, executor 6): ExecutorLostFailure (executor 6 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 8207679994452269688 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 4 at RPC address 172.31.46.2:33210, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 8207679994452269688 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 4 on ip-172-31-46-2.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 116.0 in stage 17.0 (TID 6586, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 101.0 in stage 17.0 (TID 6571, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 56.0 in stage 17.0 (TID 6526, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 41.0 in stage 17.0 (TID 6511, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 86.0 in stage 17.0 (TID 6556, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 26.0 in stage 17.0 (TID 6496, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 71.0 in stage 17.0 (TID 6541, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 11.0 in stage 17.0 (TID 6481, ip-172-31-46-2.us-east-2.compute.internal, executor 4): ExecutorLostFailure (executor 4 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 5603642647492718581 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 11 at RPC address 172.31.35.206:59292, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 5603642647492718581 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 ERROR client.TransportClient: Failed to send RPC RPC 7258985432857336568 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 11 on ip-172-31-35-206.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 1 at RPC address 172.31.35.73:50966, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 7258985432857336568 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 107.0 in stage 17.0 (TID 6577, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 47.0 in stage 17.0 (TID 6517, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 92.0 in stage 17.0 (TID 6562, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 77.0 in stage 17.0 (TID 6547, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 32.0 in stage 17.0 (TID 6502, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 17.0 in stage 17.0 (TID 6487, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 62.0 in stage 17.0 (TID 6532, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 2.0 in stage 17.0 (TID 6472, ip-172-31-35-206.us-east-2.compute.internal, executor 11): ExecutorLostFailure (executor 11 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 ERROR cluster.YarnScheduler: Lost executor 1 on ip-172-31-35-73.us-east-2.compute.internal: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 93.0 in stage 17.0 (TID 6563, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 33.0 in stage 17.0 (TID 6503, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 78.0 in stage 17.0 (TID 6548, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 63.0 in stage 17.0 (TID 6533, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 18.0 in stage 17.0 (TID 6488, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 108.0 in stage 17.0 (TID 6578, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 48.0 in stage 17.0 (TID 6518, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:41 WARN scheduler.TaskSetManager: Lost task 3.0 in stage 17.0 (TID 6473, ip-172-31-35-73.us-east-2.compute.internal, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 8013760197165601884 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 14 at RPC address 172.31.47.161:42412, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 8013760197165601884 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:42 ERROR cluster.YarnScheduler: Lost executor 14 on ip-172-31-47-161.us-east-2.compute.internal: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 109.0 in stage 17.0 (TID 6579, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 64.0 in stage 17.0 (TID 6534, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 19.0 in stage 17.0 (TID 6489, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 94.0 in stage 17.0 (TID 6564, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 4.0 in stage 17.0 (TID 6474, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 49.0 in stage 17.0 (TID 6519, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 34.0 in stage 17.0 (TID 6504, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 79.0 in stage 17.0 (TID 6549, ip-172-31-47-161.us-east-2.compute.internal, executor 14): ExecutorLostFailure (executor 14 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 8282907392084411478 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 5 at RPC address 172.31.39.205:37716, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 8282907392084411478 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:42 ERROR cluster.YarnScheduler: Lost executor 5 on ip-172-31-39-205.us-east-2.compute.internal: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 102.0 in stage 17.0 (TID 6572, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 42.0 in stage 17.0 (TID 6512, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 87.0 in stage 17.0 (TID 6557, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 72.0 in stage 17.0 (TID 6542, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 27.0 in stage 17.0 (TID 6497, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 117.0 in stage 17.0 (TID 6587, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 57.0 in stage 17.0 (TID 6527, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 12.0 in stage 17.0 (TID 6482, ip-172-31-39-205.us-east-2.compute.internal, executor 5): ExecutorLostFailure (executor 5 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 9148773839142634624 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 13 at RPC address 172.31.47.161:42414, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 9148773839142634624 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:42 ERROR cluster.YarnScheduler: Lost executor 13 on ip-172-31-47-161.us-east-2.compute.internal: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 111.0 in stage 17.0 (TID 6581, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 66.0 in stage 17.0 (TID 6536, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 96.0 in stage 17.0 (TID 6566, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 51.0 in stage 17.0 (TID 6521, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 6.0 in stage 17.0 (TID 6476, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 81.0 in stage 17.0 (TID 6551, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 36.0 in stage 17.0 (TID 6506, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 21.0 in stage 17.0 (TID 6491, ip-172-31-47-161.us-east-2.compute.internal, executor 13): ExecutorLostFailure (executor 13 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 8134148267437261295 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 9 at RPC address 172.31.35.155:50400, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 8134148267437261295 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:42 ERROR cluster.YarnScheduler: Lost executor 9 on ip-172-31-35-155.us-east-2.compute.internal: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 84.0 in stage 17.0 (TID 6554, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 24.0 in stage 17.0 (TID 6494, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 114.0 in stage 17.0 (TID 6584, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 69.0 in stage 17.0 (TID 6539, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 54.0 in stage 17.0 (TID 6524, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 99.0 in stage 17.0 (TID 6569, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 9.0 in stage 17.0 (TID 6479, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 39.0 in stage 17.0 (TID 6509, ip-172-31-35-155.us-east-2.compute.internal, executor 9): ExecutorLostFailure (executor 9 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 9080375028442950185 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 10 at RPC address 172.31.35.155:50398, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 9080375028442950185 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more
21/11/10 22:11:42 ERROR cluster.YarnScheduler: Lost executor 10 on ip-172-31-35-155.us-east-2.compute.internal: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 28.0 in stage 17.0 (TID 6498, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 118.0 in stage 17.0 (TID 6588, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 73.0 in stage 17.0 (TID 6543, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 13.0 in stage 17.0 (TID 6483, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 58.0 in stage 17.0 (TID 6528, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 103.0 in stage 17.0 (TID 6573, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 88.0 in stage 17.0 (TID 6558, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 WARN scheduler.TaskSetManager: Lost task 43.0 in stage 17.0 (TID 6513, ip-172-31-35-155.us-east-2.compute.internal, executor 10): ExecutorLostFailure (executor 10 exited caused by one of the running tasks) Reason: Slave lost
21/11/10 22:11:42 ERROR client.TransportClient: Failed to send RPC RPC 6612597259409757764 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
21/11/10 22:11:42 WARN cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: Attempted to get executor loss reason for executor id 3 at RPC address 172.31.46.2:33208, but got no response. Marking as slave lost.
java.io.IOException: Failed to send RPC RPC 6612597259409757764 to /172.31.35.73:50926: java.nio.channels.ClosedChannelException
at org.apache.spark.network.client.TransportClient$RpcChannelListener.handleFailure(TransportClient.java:363)
at org.apache.spark.network.client.TransportClient$StdChannelListener.operationComplete(TransportClient.java:340)
at io.netty.util.concurrent.DefaultPromise.notifyListener0(DefaultPromise.java:577)
at io.netty.util.concurrent.DefaultPromise.notifyListenersNow(DefaultPromise.java:551)
at io.netty.util.concurrent.DefaultPromise.notifyListeners(DefaultPromise.java:490)
at io.netty.util.concurrent.DefaultPromise.setValue0(DefaultPromise.java:615)
at io.netty.util.concurrent.DefaultPromise.setFailure0(DefaultPromise.java:608)
at io.netty.util.concurrent.DefaultPromise.tryFailure(DefaultPromise.java:117)
at io.netty.channel.AbstractChannel$AbstractUnsafe.safeSetFailure(AbstractChannel.java:993)
at io.netty.channel.AbstractChannel$AbstractUnsafe.write(AbstractChannel.java:865)
at io.netty.channel.DefaultChannelPipeline$HeadContext.write(DefaultChannelPipeline.java:1367)
at io.netty.channel.AbstractChannelHandlerContext.invokeWrite0(AbstractChannelHandlerContext.java:717)
at io.netty.channel.AbstractChannelHandlerContext.invokeWriteAndFlush(AbstractChannelHandlerContext.java:764)
at io.netty.channel.AbstractChannelHandlerContext$WriteTask.run(AbstractChannelHandlerContext.java:1104)
at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.nio.channels.ClosedChannelException
at io.netty.channel.AbstractChannel$AbstractUnsafe.newClosedChannelException(AbstractChannel.java:957)
... 12 more