geomesa orc stack trace
2018-06-01 16:06:48 WARN NativeCodeLoader:62 - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Main class:
Job1
Arguments:
Spark config:
(spark.app.name,Job1)
(spark.submit.deployMode,client)
(spark.master,local[2])
(spark.jars,*********(redacted))
Classpath elements:
file:/Users/geoheil/Downloads/geomesa-fsds-starter/build/libs/geomesa-fsds-starter-all.jar
2018-06-01 16:06:49 INFO SparkContext:54 - Running Spark version 2.3.0
2018-06-01 16:06:49 INFO SparkContext:54 - Submitted application: dummy
2018-06-01 16:06:49 INFO SecurityManager:54 - Changing view acls to: geoheil
2018-06-01 16:06:49 INFO SecurityManager:54 - Changing modify acls to: geoheil
2018-06-01 16:06:49 INFO SecurityManager:54 - Changing view acls groups to:
2018-06-01 16:06:49 INFO SecurityManager:54 - Changing modify acls groups to:
2018-06-01 16:06:49 INFO SecurityManager:54 - SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(geoheil); groups with view permissions: Set(); users with modify permissions: Set(geoheil); groups with modify permissions: Set()
2018-06-01 16:06:49 INFO Utils:54 - Successfully started service 'sparkDriver' on port 56990.
2018-06-01 16:06:49 INFO SparkEnv:54 - Registering MapOutputTracker
2018-06-01 16:06:49 INFO SparkEnv:54 - Registering BlockManagerMaster
2018-06-01 16:06:49 INFO BlockManagerMasterEndpoint:54 - Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
2018-06-01 16:06:49 INFO BlockManagerMasterEndpoint:54 - BlockManagerMasterEndpoint up
2018-06-01 16:06:49 INFO DiskBlockManager:54 - Created local directory at /private/var/folders/lz/vlbj1rj12dzbvbmj16jbp0k80000gn/T/blockmgr-f71b7931-8bce-4049-ae4b-9f74036fad7c
2018-06-01 16:06:49 INFO MemoryStore:54 - MemoryStore started with capacity 366.3 MB
2018-06-01 16:06:49 INFO SparkEnv:54 - Registering OutputCommitCoordinator
2018-06-01 16:06:49 INFO log:192 - Logging initialized @2336ms
2018-06-01 16:06:49 INFO Server:346 - jetty-9.3.z-SNAPSHOT
2018-06-01 16:06:49 INFO Server:414 - Started @2415ms
2018-06-01 16:06:49 INFO AbstractConnector:278 - Started ServerConnector@5ea502e0{HTTP/1.1,[http/1.1]}{0.0.0.0:4040}
2018-06-01 16:06:49 INFO Utils:54 - Successfully started service 'SparkUI' on port 4040.
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@51a06cbe{/jobs,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@329a1243{/jobs/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@ecf9fb3{/jobs/job,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27f9e982{/jobs/job/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@4593ff34{/stages,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@37d3d232{/stages/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@30c0ccff{/stages/stage,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2b46a8c1{/stages/stage/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1d572e62{/stages/pool,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@29caf222{/stages/pool/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@46cf05f7{/storage,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@5851bd4f{/storage/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7cd1ac19{/storage/rdd,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2f40a43{/storage/rdd/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3caa4757{/environment,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@69c43e48{/environment/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1804f60d{/executors,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@3a80515c{/executors/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@547e29a4{/executors/threadDump,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1c807b1d{/executors/threadDump/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@238b521e{/static,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@7728643a{/,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@320e400{/api,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2c444798{/jobs/job/kill,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1af7f54a{/stages/stage/kill,null,AVAILABLE,@Spark}
2018-06-01 16:06:49 INFO SparkUI:54 - Bound SparkUI to 0.0.0.0, and started at http://192.168.0.144:4040
2018-06-01 16:06:49 INFO SparkContext:54 - Added JAR file:/Users/geoheil/Downloads/geomesa-fsds-starter/build/libs/geomesa-fsds-starter-all.jar at spark://192.168.0.144:56990/jars/geomesa-fsds-starter-all.jar with timestamp 1527862009950
2018-06-01 16:06:50 INFO Executor:54 - Starting executor ID driver on host localhost
2018-06-01 16:06:50 INFO Utils:54 - Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 56991.
2018-06-01 16:06:50 INFO NettyBlockTransferService:54 - Server created on 192.168.0.144:56991
2018-06-01 16:06:50 INFO BlockManager:54 - Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
2018-06-01 16:06:50 INFO BlockManagerMaster:54 - Registering BlockManager BlockManagerId(driver, 192.168.0.144, 56991, None)
2018-06-01 16:06:50 INFO BlockManagerMasterEndpoint:54 - Registering block manager 192.168.0.144:56991 with 366.3 MB RAM, BlockManagerId(driver, 192.168.0.144, 56991, None)
2018-06-01 16:06:50 INFO BlockManagerMaster:54 - Registered BlockManager BlockManagerId(driver, 192.168.0.144, 56991, None)
2018-06-01 16:06:50 INFO BlockManager:54 - Initialized BlockManager: BlockManagerId(driver, 192.168.0.144, 56991, None)
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@1b9c1b51{/metrics/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:50 INFO SharedState:54 - Setting hive.metastore.warehouse.dir ('null') to the value of spark.sql.warehouse.dir ('file:/Users/geoheil/Downloads/geomesa-fsds-starter/spark-warehouse/').
2018-06-01 16:06:50 INFO SharedState:54 - Warehouse path is 'file:/Users/geoheil/Downloads/geomesa-fsds-starter/spark-warehouse/'.
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2ce45a7b{/SQL,null,AVAILABLE,@Spark}
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@153d4abb{/SQL/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@2bc9a775{/SQL/execution,null,AVAILABLE,@Spark}
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@27b000f7{/SQL/execution/json,null,AVAILABLE,@Spark}
2018-06-01 16:06:50 INFO ContextHandler:781 - Started o.s.j.s.ServletContextHandler@29c2c826{/static/sql,null,AVAILABLE,@Spark}
2018-06-01 16:06:51 INFO StateStoreCoordinatorRef:54 - Registered StateStoreCoordinator endpoint
2018-06-01 16:06:54 INFO CodeGenerator:54 - Code generated in 211.375769 ms
2018-06-01 16:06:54 INFO CodeGenerator:54 - Code generated in 24.810019 ms
2018-06-01 16:06:54 INFO SparkContext:54 - Starting job: show at Job1.scala:32
2018-06-01 16:06:54 INFO DAGScheduler:54 - Got job 0 (show at Job1.scala:32) with 1 output partitions
2018-06-01 16:06:54 INFO DAGScheduler:54 - Final stage: ResultStage 0 (show at Job1.scala:32)
2018-06-01 16:06:54 INFO DAGScheduler:54 - Parents of final stage: List()
2018-06-01 16:06:54 INFO DAGScheduler:54 - Missing parents: List()
2018-06-01 16:06:54 INFO DAGScheduler:54 - Submitting ResultStage 0 (MapPartitionsRDD[4] at show at Job1.scala:32), which has no missing parents
2018-06-01 16:06:54 INFO MemoryStore:54 - Block broadcast_0 stored as values in memory (estimated size 9.4 KB, free 366.3 MB)
2018-06-01 16:06:54 INFO MemoryStore:54 - Block broadcast_0_piece0 stored as bytes in memory (estimated size 4.5 KB, free 366.3 MB)
2018-06-01 16:06:54 INFO BlockManagerInfo:54 - Added broadcast_0_piece0 in memory on 192.168.0.144:56991 (size: 4.5 KB, free: 366.3 MB)
2018-06-01 16:06:54 INFO SparkContext:54 - Created broadcast 0 from broadcast at DAGScheduler.scala:1039
2018-06-01 16:06:55 INFO DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[4] at show at Job1.scala:32) (first 15 tasks are for partitions Vector(0))
2018-06-01 16:06:55 INFO TaskSchedulerImpl:54 - Adding task set 0.0 with 1 tasks
2018-06-01 16:06:55 INFO TaskSetManager:54 - Starting task 0.0 in stage 0.0 (TID 0, localhost, executor driver, partition 0, PROCESS_LOCAL, 7919 bytes)
2018-06-01 16:06:55 INFO Executor:54 - Running task 0.0 in stage 0.0 (TID 0)
2018-06-01 16:06:55 INFO Executor:54 - Fetching spark://192.168.0.144:56990/jars/geomesa-fsds-starter-all.jar with timestamp 1527862009950
2018-06-01 16:06:55 INFO TransportClientFactory:267 - Successfully created connection to /192.168.0.144:56990 after 44 ms (0 ms spent in bootstraps)
2018-06-01 16:06:55 INFO Utils:54 - Fetching spark://192.168.0.144:56990/jars/geomesa-fsds-starter-all.jar to /private/var/folders/lz/vlbj1rj12dzbvbmj16jbp0k80000gn/T/spark-ea2d0ab4-9e2e-4001-9673-96375229ecc3/userFiles-53172b79-3184-4457-8f03-e65f41986fd3/fetchFileTemp3257654721094779509.tmp
2018-06-01 16:06:55 INFO Executor:54 - Adding file:/private/var/folders/lz/vlbj1rj12dzbvbmj16jbp0k80000gn/T/spark-ea2d0ab4-9e2e-4001-9673-96375229ecc3/userFiles-53172b79-3184-4457-8f03-e65f41986fd3/geomesa-fsds-starter-all.jar to class loader
2018-06-01 16:06:55 INFO CodeGenerator:54 - Code generated in 9.281096 ms
2018-06-01 16:06:55 INFO Executor:54 - Finished task 0.0 in stage 0.0 (TID 0). 1124 bytes result sent to driver
2018-06-01 16:06:55 INFO TaskSetManager:54 - Finished task 0.0 in stage 0.0 (TID 0) in 790 ms on localhost (executor driver) (1/1)
2018-06-01 16:06:55 INFO TaskSchedulerImpl:54 - Removed TaskSet 0.0, whose tasks have all completed, from pool
2018-06-01 16:06:55 INFO DAGScheduler:54 - ResultStage 0 (show at Job1.scala:32) finished in 1.217 s
2018-06-01 16:06:55 INFO DAGScheduler:54 - Job 0 finished: show at Job1.scala:32, took 1.266913 s
+----------+-----------+
|       dtg|       geom|
+----------+-----------+
|2018-01-01|POINT (1 2)|
+----------+-----------+
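
The two-column table above is the df.show() at Job1.scala:32. Job1.scala itself is not part of this gist, so the sketch below is only a hypothetical reconstruction of the steps the log reveals (show at line 32, createSchema at line 47, save at line 54); every identifier and option key in it is an assumption, not the author's code.

// Hypothetical reconstruction -- Job1.scala is not included in this gist.
// The "geomesa" format alias and the "fs.path"/"fs.encoding"/"geomesa.feature"
// option keys follow the GeoMesa FSDS Spark SQL documentation of this era;
// verify them against your GeoMesa version.
import org.apache.spark.sql.SparkSession

object Job1 extends App {
  val spark = SparkSession.builder().appName("dummy").master("local[2]").getOrCreate()
  import spark.implicits._

  // Two columns matching the show() output above; the real job presumably
  // builds a JTS Point geometry rather than a WKT string.
  val df = Seq(("2018-01-01", "POINT (1 2)")).toDF("dtg", "geom")
  df.show()                                     // Job1.scala:32

  // The real job also calls createSchema on the data store directly
  // (Job1.scala:47 in the traces below); omitted here.

  df.write
    .format("geomesa")                          // GeoMesa's Spark SQL data source
    .option("fs.path", "fooGeomesaFsds")        // assumed: root taken from the ORC path below
    .option("fs.encoding", "orc")
    .option("geomesa.feature", "myDummyData")   // assumed: type name taken from the ORC path below
    .save()                                     // Job1.scala:54
}
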
2018-06-01 16:06:57 INFO ENGINE:? - dataFileCache open start
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 4
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 16
2018-06-01 16:06:57 INFO BlockManagerInfo:54 - Removed broadcast_0_piece0 on 192.168.0.144:56991 in memory (size: 4.5 KB, free: 366.3 MB)
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 2
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 19
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 14
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 20
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 6
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 11
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 23
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 9
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 13
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 18
2018-06-01 16:06:57 INFO ContextCleaner:54 - Cleaned accumulator 22
2018-06-01 16:06:58 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$3$$anonfun$apply$2.apply(FileSystemStorageManager.scala:44)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$3$$anonfun$apply$2.apply(FileSystemStorageManager.scala:44)
at scala.Option.flatMap(Option.scala:171)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$3.apply(FileSystemStorageManager.scala:44)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$3.apply(FileSystemStorageManager.scala:44)
at scala.Option.orElse(Option.scala:289)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storage(FileSystemStorageManager.scala:44)
at org.locationtech.geomesa.fs.FileSystemDataStore.createSchema(FileSystemDataStore.scala:49)
at org.locationtech.geomesa.fs.FileSystemDataStore.createSchema(FileSystemDataStore.scala:27)
at Job1$.delayedEndpoint$Job1$1(Job1.scala:47)
at Job1$delayedInit$body.apply(Job1.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Job1$.main(Job1.scala:12)
at Job1.main(Job1.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
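
The IllegalArgumentException above (repeated several times below with slightly different call paths) is raised by ConverterStorageFactory, which FileSystemStorageManager apparently consults for every storage path it probes; with no converter configuration supplied it warns and falls through to the other factories, so it reads as noise rather than the fatal failure. If one wanted to declare the feature type explicitly anyway, a minimal sketch, assuming the GeoMesa 2.x API (the type name and attributes are inferred from this log, not taken from Job1.scala):

// Minimal sketch, assuming GeoMesa 2.x. The spec string is an assumption
// based on the dtg/geom columns shown above.
import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes

val sft = SimpleFeatureTypes.createType("myDummyData", "dtg:Date,*geom:Point:srid=4326")
// The FSDS also needs a partition scheme attached to the type before
// createSchema (the 2018/01/01 directories below suggest a daily,
// date-based scheme); the attachment API varies by version, so it is
// omitted here.
// ds.createSchema(sft)   // as Job1.scala:47 evidently does
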
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 3
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 5
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 26
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 21
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 25
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 10
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 7
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 12
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 8
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 15
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 17
2018-06-01 16:06:59 INFO ContextCleaner:54 - Cleaned accumulator 24
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.find(Iterator.scala:943)
at scala.collection.AbstractIterator.find(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$4.apply(FileSystemStorageManager.scala:45)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$storage$4.apply(FileSystemStorageManager.scala:45)
at scala.Option.orElse(Option.scala:289)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storage(FileSystemStorageManager.scala:45)
at org.locationtech.geomesa.fs.FileSystemDataStore.createSchema(FileSystemDataStore.scala:49)
at org.locationtech.geomesa.fs.FileSystemDataStore.createSchema(FileSystemDataStore.scala:27)
at Job1$.delayedEndpoint$Job1$1(Job1.scala:47)
at Job1$delayedInit$body.apply(Job1.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Job1$.main(Job1.scala:12)
at Job1.main(Job1.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storages(FileSystemStorageManager.scala:63)
at org.locationtech.geomesa.fs.FileSystemDataStore.createTypeNames(FileSystemDataStore.scala:42)
at org.geotools.data.store.ContentDataStore.getTypeNames(ContentDataStore.java:308)
at org.locationtech.geomesa.spark.GeoMesaDataSource.createRelation(GeoMesaSparkSQL.scala:177)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at Job1$.delayedEndpoint$Job1$1(Job1.scala:54)
at Job1$delayedInit$body.apply(Job1.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Job1$.main(Job1.scala:12)
at Job1.main(Job1.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storages(FileSystemStorageManager.scala:63)
at org.locationtech.geomesa.fs.FileSystemDataStore.createTypeNames(FileSystemDataStore.scala:42)
at org.geotools.data.store.ContentDataStore.entry(ContentDataStore.java:581)
at org.geotools.data.store.ContentDataStore.ensureEntry(ContentDataStore.java:617)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:393)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:360)
at org.geotools.data.store.ContentDataStore.getSchema(ContentDataStore.java:344)
at org.locationtech.geomesa.spark.GeoMesaDataSource.createRelation(GeoMesaSparkSQL.scala:180)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at Job1$.delayedEndpoint$Job1$1(Job1.scala:54)
at Job1$delayedInit$body.apply(Job1.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Job1$.main(Job1.scala:12)
at Job1.main(Job1.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-06-01 16:06:59 INFO CodeGenerator:54 - Code generated in 21.30678 ms
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storages(FileSystemStorageManager.scala:63)
at org.locationtech.geomesa.fs.FileSystemDataStore.createTypeNames(FileSystemDataStore.scala:42)
at org.geotools.data.store.ContentDataStore.entry(ContentDataStore.java:581)
at org.geotools.data.store.ContentDataStore.ensureEntry(ContentDataStore.java:617)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:393)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:360)
at org.geotools.data.store.ContentDataStore.getSchema(ContentDataStore.java:344)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider.save(FileSystemRDDProvider.scala:78)
at org.locationtech.geomesa.spark.GeoMesaDataSource.createRelation(GeoMesaSparkSQL.scala:206)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654)
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267)
at Job1$.delayedEndpoint$Job1$1(Job1.scala:54)
at Job1$delayedInit$body.apply(Job1.scala:12)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at Job1$.main(Job1.scala:12)
at Job1.main(Job1.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
2018-06-01 16:06:59 INFO SparkContext:54 - Starting job: foreachPartition at FileSystemRDDProvider.scala:84
2018-06-01 16:06:59 INFO DAGScheduler:54 - Got job 1 (foreachPartition at FileSystemRDDProvider.scala:84) with 1 output partitions
2018-06-01 16:06:59 INFO DAGScheduler:54 - Final stage: ResultStage 1 (foreachPartition at FileSystemRDDProvider.scala:84)
2018-06-01 16:06:59 INFO DAGScheduler:54 - Parents of final stage: List()
2018-06-01 16:06:59 INFO DAGScheduler:54 - Missing parents: List()
2018-06-01 16:06:59 INFO DAGScheduler:54 - Submitting ResultStage 1 (MapPartitionsRDD[9] at mapPartitions at GeoMesaSparkSQL.scala:195), which has no missing parents
2018-06-01 16:06:59 INFO MemoryStore:54 - Block broadcast_1 stored as values in memory (estimated size 13.5 KB, free 366.3 MB)
2018-06-01 16:06:59 INFO MemoryStore:54 - Block broadcast_1_piece0 stored as bytes in memory (estimated size 6.7 KB, free 366.3 MB)
2018-06-01 16:06:59 INFO BlockManagerInfo:54 - Added broadcast_1_piece0 in memory on 192.168.0.144:56991 (size: 6.7 KB, free: 366.3 MB)
2018-06-01 16:06:59 INFO SparkContext:54 - Created broadcast 1 from broadcast at DAGScheduler.scala:1039
2018-06-01 16:06:59 INFO DAGScheduler:54 - Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[9] at mapPartitions at GeoMesaSparkSQL.scala:195) (first 15 tasks are for partitions Vector(0))
2018-06-01 16:06:59 INFO TaskSchedulerImpl:54 - Adding task set 1.0 with 1 tasks
2018-06-01 16:06:59 INFO TaskSetManager:54 - Starting task 0.0 in stage 1.0 (TID 1, localhost, executor driver, partition 0, PROCESS_LOCAL, 7919 bytes)
2018-06-01 16:06:59 INFO Executor:54 - Running task 0.0 in stage 1.0 (TID 1)
2018-06-01 16:06:59 INFO CodeGenerator:54 - Code generated in 15.178817 ms
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storages(FileSystemStorageManager.scala:63)
at org.locationtech.geomesa.fs.FileSystemDataStore.createTypeNames(FileSystemDataStore.scala:42)
at org.geotools.data.store.ContentDataStore.entry(ContentDataStore.java:581)
at org.geotools.data.store.ContentDataStore.ensureEntry(ContentDataStore.java:617)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:393)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:360)
at org.geotools.data.store.ContentDataStore.getSchema(ContentDataStore.java:344)
at org.locationtech.geomesa.spark.GeoMesaDataSource$$anonfun$21.apply(GeoMesaSparkSQL.scala:197)
at org.locationtech.geomesa.spark.GeoMesaDataSource$$anonfun$21.apply(GeoMesaSparkSQL.scala:195)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$23.apply(RDD.scala:800)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-06-01 16:06:59 WARN ConverterStorageFactory:37 - Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name
java.lang.IllegalArgumentException: Must provide either simple feature type config or name
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory$$anonfun$2.apply(ConverterStorageFactory.scala:41)
at scala.Option.getOrElse(Option.scala:121)
at org.locationtech.geomesa.fs.storage.converter.ConverterStorageFactory.load(ConverterStorageFactory.scala:41)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1$$anonfun$3.apply(FileSystemStorageManager.scala:108)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:109)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath$1.apply(FileSystemStorageManager.scala:107)
at org.locationtech.geomesa.utils.stats.MethodProfiling$class.profile(MethodProfiling.scala:19)
at org.locationtech.geomesa.fs.FileSystemStorageManager.profile(FileSystemStorageManager.scala:29)
at org.locationtech.geomesa.fs.FileSystemStorageManager.org$locationtech$geomesa$fs$FileSystemStorageManager$$loadPath(FileSystemStorageManager.scala:114)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at org.locationtech.geomesa.fs.FileSystemStorageManager$$anonfun$org$locationtech$geomesa$fs$FileSystemStorageManager$$loadAll$2.apply(FileSystemStorageManager.scala:93)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.FileSystemStorageManager.storages(FileSystemStorageManager.scala:63)
at org.locationtech.geomesa.fs.FileSystemDataStore.createTypeNames(FileSystemDataStore.scala:42)
at org.geotools.data.store.ContentDataStore.entry(ContentDataStore.java:581)
at org.geotools.data.store.ContentDataStore.ensureEntry(ContentDataStore.java:617)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:393)
at org.geotools.data.store.ContentDataStore.getFeatureSource(ContentDataStore.java:376)
at org.geotools.data.store.ContentDataStore.ensureFeatureStore(ContentDataStore.java:457)
at org.geotools.data.store.ContentDataStore.getFeatureWriterAppend(ContentDataStore.java:489)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:86)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-06-01 16:06:59 INFO PhysicalFsWriter:88 - ORC writer created for path: fooGeomesaFsds/myDummyData/2018/01/01/3_W38f4db5c229c45c1a079b49e9a70b908.orc with stripeSize: 67108864 blockSize: 268435456 compression: ZLIB bufferSize: 262144
2018-06-01 16:06:59 INFO OrcCodecPool:58 - Got brand-new codec ZLIB
2018-06-01 16:06:59 INFO WriterImpl:195 - ORC writer created for path: fooGeomesaFsds/myDummyData/2018/01/01/3_W38f4db5c229c45c1a079b49e9a70b908.orc with stripeSize: 67108864 blockSize: 268435456 compression: ZLIB bufferSize: 262144
2018-06-01 16:06:59 ERROR Executor:91 - Exception in task 0.0 in stage 1.0 (TID 1)
org.locationtech.geomesa.fs.shaded.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch;
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3934)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1.write(FileSystemFeatureStore.scala:75)
at org.geotools.data.store.EventContentFeatureWriter.write(EventContentFeatureWriter.java:125)
at org.geotools.data.InProcessLockingManager$1.write(InProcessLockingManager.java:337)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:90)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:88)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:88)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch;
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemWriter.<init>(OrcFileSystemWriter.scala:26)
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemStorage.createWriter(OrcFileSystemStorage.scala:36)
at org.locationtech.geomesa.fs.storage.common.MetadataFileSystemStorage.getWriter(MetadataFileSystemStorage.scala:72)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:58)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:57)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
... 22 more
2018-06-01 16:06:59 WARN TaskSetManager:66 - Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.locationtech.geomesa.fs.shaded.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch;
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3934)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1.write(FileSystemFeatureStore.scala:75)
at org.geotools.data.store.EventContentFeatureWriter.write(EventContentFeatureWriter.java:125)
at org.geotools.data.InProcessLockingManager$1.write(InProcessLockingManager.java:337)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:90)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:88)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:88)
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:109)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch;
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemWriter.<init>(OrcFileSystemWriter.scala:26)
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemStorage.createWriter(OrcFileSystemStorage.scala:36)
at org.locationtech.geomesa.fs.storage.common.MetadataFileSystemStorage.getWriter(MetadataFileSystemStorage.scala:72)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:58)
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:57)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280)
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195)
... 22 more
2018-06-01 16:06:59 ERROR TaskSetManager:70 - Task 0 in stage 1.0 failed 1 times; aborting job
2018-06-01 16:06:59 INFO TaskSchedulerImpl:54 - Removed TaskSet 1.0, whose tasks have all completed, from pool
2018-06-01 16:06:59 INFO TaskSchedulerImpl:54 - Cancelling stage 1
2018-06-01 16:06:59 INFO DAGScheduler:54 - ResultStage 1 (foreachPartition at FileSystemRDDProvider.scala:84) failed in 0.412 s due to Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.locationtech.geomesa.fs.shaded.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3934) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1.write(FileSystemFeatureStore.scala:75) | |
at org.geotools.data.store.EventContentFeatureWriter.write(EventContentFeatureWriter.java:125) | |
at org.geotools.data.InProcessLockingManager$1.write(InProcessLockingManager.java:337) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:90) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:88) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:893) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:88) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) | |
at org.apache.spark.scheduler.Task.run(Task.scala:109) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemWriter.<init>(OrcFileSystemWriter.scala:26) | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemStorage.createWriter(OrcFileSystemStorage.scala:36) | |
at org.locationtech.geomesa.fs.storage.common.MetadataFileSystemStorage.getWriter(MetadataFileSystemStorage.scala:72) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:58) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:57) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195) | |
... 22 more | |
Driver stacktrace: | |
2018-06-01 16:06:59 INFO DAGScheduler:54 - Job 1 failed: foreachPartition at FileSystemRDDProvider.scala:84, took 0.418316 s | |
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 1 times, most recent failure: Lost task 0.0 in stage 1.0 (TID 1, localhost, executor driver): org.locationtech.geomesa.fs.shaded.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3934) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1.write(FileSystemFeatureStore.scala:75) | |
at org.geotools.data.store.EventContentFeatureWriter.write(EventContentFeatureWriter.java:125) | |
at org.geotools.data.InProcessLockingManager$1.write(InProcessLockingManager.java:337) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:90) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:88) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:893) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:88) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) | |
at org.apache.spark.scheduler.Task.run(Task.scala:109) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemWriter.<init>(OrcFileSystemWriter.scala:26) | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemStorage.createWriter(OrcFileSystemStorage.scala:36) | |
at org.locationtech.geomesa.fs.storage.common.MetadataFileSystemStorage.getWriter(MetadataFileSystemStorage.scala:72) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:58) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:57) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195) | |
... 22 more | |
Driver stacktrace: | |
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1599) | |
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1587) | |
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1586) | |
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59) | |
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48) | |
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1586) | |
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) | |
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:831) | |
at scala.Option.foreach(Option.scala:257) | |
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:831) | |
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1820) | |
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1769) | |
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1758) | |
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48) | |
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:642) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2027) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2048) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2067) | |
at org.apache.spark.SparkContext.runJob(SparkContext.scala:2092) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:929) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:927) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112) | |
at org.apache.spark.rdd.RDD.withScope(RDD.scala:363) | |
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:927) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider.save(FileSystemRDDProvider.scala:84) | |
at org.locationtech.geomesa.spark.GeoMesaDataSource.createRelation(GeoMesaSparkSQL.scala:206) | |
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46) | |
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70) | |
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68) | |
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86) | |
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131) | |
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127) | |
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:155) | |
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151) | |
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152) | |
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127) | |
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:80) | |
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:80) | |
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654) | |
at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:654) | |
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:77) | |
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:654) | |
at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:273) | |
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:267) | |
at Job1$.delayedEndpoint$Job1$1(Job1.scala:54) | |
at Job1$delayedInit$body.apply(Job1.scala:12) | |
at scala.Function0$class.apply$mcV$sp(Function0.scala:34) | |
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12) | |
at scala.App$$anonfun$main$1.apply(App.scala:76) | |
at scala.App$$anonfun$main$1.apply(App.scala:76) | |
at scala.collection.immutable.List.foreach(List.scala:381) | |
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35) | |
at scala.App$class.main(App.scala:76) | |
at Job1$.main(Job1.scala:12) | |
at Job1.main(Job1.scala) | |
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) | |
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) | |
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) | |
at java.lang.reflect.Method.invoke(Method.java:498) | |
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52) | |
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:879) | |
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:197) | |
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:227) | |
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:136) | |
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala) | |
Caused by: org.locationtech.geomesa.fs.shaded.com.google.common.util.concurrent.ExecutionError: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2199) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.get(LocalCache.java:3934) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3938) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LocalLoadingCache.get(LocalCache.java:4821) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1.write(FileSystemFeatureStore.scala:75) | |
at org.geotools.data.store.EventContentFeatureWriter.write(EventContentFeatureWriter.java:125) | |
at org.geotools.data.InProcessLockingManager$1.write(InProcessLockingManager.java:337) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:90) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2$$anonfun$apply$1.apply(FileSystemRDDProvider.scala:88) | |
at scala.collection.Iterator$class.foreach(Iterator.scala:893) | |
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:88) | |
at org.locationtech.geomesa.fs.spark.FileSystemRDDProvider$$anonfun$save$2.apply(FileSystemRDDProvider.scala:84) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:929) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2067) | |
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87) | |
at org.apache.spark.scheduler.Task.run(Task.scala:109) | |
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345) | |
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) | |
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) | |
at java.lang.Thread.run(Thread.java:748) | |
Caused by: java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch; | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemWriter.<init>(OrcFileSystemWriter.scala:26) | |
at org.locationtech.geomesa.fs.storage.orc.OrcFileSystemStorage.createWriter(OrcFileSystemStorage.scala:36) | |
at org.locationtech.geomesa.fs.storage.common.MetadataFileSystemStorage.getWriter(MetadataFileSystemStorage.scala:72) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:58) | |
at org.locationtech.geomesa.fs.FileSystemFeatureStore$$anon$1$$anon$3.load(FileSystemFeatureStore.scala:57) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3524) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2317) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2280) | |
at org.locationtech.geomesa.fs.shaded.com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2195) | |
... 22 more | |
2018-06-01 16:06:59 INFO SparkContext:54 - Invoking stop() from shutdown hook | |
2018-06-01 16:06:59 INFO AbstractConnector:318 - Stopped Spark@5ea502e0{HTTP/1.1,[http/1.1]}{0.0.0.0:4040} | |
2018-06-01 16:06:59 INFO SparkUI:54 - Stopped Spark web UI at http://192.168.0.144:4040 | |
2018-06-01 16:06:59 INFO MapOutputTrackerMasterEndpoint:54 - MapOutputTrackerMasterEndpoint stopped! | |
2018-06-01 16:06:59 INFO MemoryStore:54 - MemoryStore cleared | |
2018-06-01 16:06:59 INFO BlockManager:54 - BlockManager stopped | |
2018-06-01 16:06:59 INFO BlockManagerMaster:54 - BlockManagerMaster stopped | |
2018-06-01 16:06:59 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint:54 - OutputCommitCoordinator stopped! | |
2018-06-01 16:06:59 INFO SparkContext:54 - Successfully stopped SparkContext | |
2018-06-01 16:06:59 INFO ShutdownHookManager:54 - Shutdown hook called | |
2018-06-01 16:06:59 INFO ShutdownHookManager:54 - Deleting directory /private/var/folders/lz/vlbj1rj12dzbvbmj16jbp0k80000gn/T/spark-ea2d0ab4-9e2e-4001-9673-96375229ecc3 | |
2018-06-01 16:06:59 INFO ShutdownHookManager:54 - Deleting directory /private/var/folders/lz/vlbj1rj12dzbvbmj16jbp0k80000gn/T/spark-2e769758-eab9-4997-9af1-5e659479fcaa | |
make: *** [run] Error 1 | |
geoheil@geoheils-MacBook ~/Downloads/geomesa-fsds-starter [16:07:00] | |
> $ |
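The repeated root cause in this trace is `java.lang.NoSuchMethodError: org.apache.orc.TypeDescription.createRowBatch()Lorg/apache/hadoop/hive/ql/exec/vector/VectorizedRowBatch;`. That signature is consistent with an ORC classpath conflict: Spark 2.3 bundles the "nohive" orc-core artifacts, whose `createRowBatch()` returns the relocated `org.apache.orc.storage.ql.exec.vector.VectorizedRowBatch`, while the GeoMesa ORC writer expects the plain orc-core that returns the Hive `VectorizedRowBatch`. One way to confirm which jar wins at runtime is to ask the JVM directly — a minimal sketch using only standard JVM APIs (the object name `WhichOrc` is made up):

```scala
// Prints the jar that actually provides org.apache.orc.TypeDescription.
// If this points at an orc-core-*-nohive jar, createRowBatch() returns the
// relocated batch type and the NoSuchMethodError above is the expected result.
object WhichOrc extends App {
  val src = classOf[org.apache.orc.TypeDescription].getProtectionDomain.getCodeSource
  println(if (src == null) "bootstrap/unknown code source" else src.getLocation)
}
```

If the nohive jar is the one being loaded, the usual remedies are to put the plain orc-core ahead of Spark's jars (e.g. `spark.driver.userClassPathFirst=true` for local tests) or to shade ORC inside the application jar.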
Hi there! I am trying to use GeoMesa with the FileSystem DataStore on an isolated HDFS/Spark infrastructure, so I only have access to it through a few endpoints.
I am able to submit ("throw") my Spark application to this infrastructure as a jar and run it, so I can add jars like geomesa-fs-spark and geomesa-fs-spark-runtime, and I am trying to run the example from the FileSystem DataStore section.
So far, I get exactly the same error as you do at line 126 (ConverterStorageFactory: Couldn't create converter storage: java.lang.IllegalArgumentException: Must provide either simple feature type config or name).
You are the only person I have found with the same error, and I would like to discuss it with you if you have managed to track it down.
I don't know if it is a relevant question, but how can I configure a converter programmatically? Is that possible, or do I need to keep it somewhere on the filesystem? (See the sketch below.)
Thanks in advance!
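On the programmatic-converter question above: GeoMesa SimpleFeatureTypes and converters are both defined as Typesafe Config, so besides shipping them in a `reference.conf`/`application.conf` on the classpath, they can be built from an in-memory string. A rough sketch against the GeoMesa 2.0.x converter API (`SimpleFeatureConverters.build`); the type name, attribute spec, and converter body here are invented examples, not taken from this gist:

```scala
import com.typesafe.config.ConfigFactory
import org.locationtech.geomesa.convert.SimpleFeatureConverters
import org.locationtech.geomesa.utils.geotools.SimpleFeatureTypes

// Hypothetical feature type; replace the name and attribute spec with your own.
val sft = SimpleFeatureTypes.createType("example",
  "name:String,dtg:Date,*geom:Point:srid=4326")

// Converter definitions are plain HOCON, so they can be parsed from a string
// instead of being loaded from a file on the classpath or in HDFS.
val conf = ConfigFactory.parseString(
  """
    |{
    |  type     = "delimited-text"
    |  format   = "CSV"
    |  id-field = "$1"
    |  fields = [
    |    { name = "name", transform = "$1::string" }
    |    { name = "dtg",  transform = "date('yyyy-MM-dd', $2)" }
    |    { name = "geom", transform = "point($3::double, $4::double)" }
    |  ]
    |}
  """.stripMargin)

val converter = SimpleFeatureConverters.build[String](sft, conf)
```

Note that this alone may not fix the "Must provide either simple feature type config or name" failure: the ConverterStorageFactory also needs to find the same definitions, and the documented way is to register the same HOCON on the classpath under `geomesa.sfts.<name>` and `geomesa.converters.<name>` so the data store can resolve them by name.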