elasticsearch error
Python 3.6.4 |Anaconda, Inc.| (default, Jan 16 2018, 18:10:19)
[GCC 7.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/ubuntu/spark/jars/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/ubuntu/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
18/02/22 23:22:35 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
18/02/22 23:22:42 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 2.2.1
      /_/

Using Python version 3.6.4 (default, Jan 16 2018 18:10:19)
SparkSession available as 'spark'.
>>> csv_lines = sc.textFile("data/example.csv")
>>> data = csv_lines.map(lambda line: line.split(","))
>>> schema_data = data.map(
...     lambda x: ('ignored_key', {'name': x[0], 'company': x[1], 'title': x[2]})
... )
>>> schema_data.saveAsNewAPIHadoopFile(
...     path='-',
...     outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
...     keyClass="org.apache.hadoop.io.NullWritable",
...     valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
...     conf={ "es.resource" : "agile_data_science/executives" })
18/02/22 23:23:23 WARN EsOutputFormat: Speculative execution enabled for reducer - consider disabling it to prevent data corruption
18/02/22 23:23:24 ERROR Utils: Aborting task
org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: Found unrecoverable error [127.0.0.1:9200] returned Bad Request(400) - Rejecting mapping update to [agile_data_science] as the final mapping would have more than 1 type: [executives, test]; Bailing out..
    at org.elasticsearch.hadoop.rest.RestClient.processBulkResponse(RestClient.java:251)
    at org.elasticsearch.hadoop.rest.RestClient.bulk(RestClient.java:203)
    at org.elasticsearch.hadoop.rest.RestRepository.tryFlush(RestRepository.java:248)
    at org.elasticsearch.hadoop.rest.RestRepository.flush(RestRepository.java:270)
    at org.elasticsearch.hadoop.rest.RestRepository.close(RestRepository.java:295)
    at org.elasticsearch.hadoop.mr.EsOutputFormat$EsRecordWriter.doClose(EsOutputFormat.java:214)
    at org.elasticsearch.hadoop.mr.EsOutputFormat$EsRecordWriter.close(EsOutputFormat.java:196)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$4.apply(SparkHadoopMapReduceWriter.scala:155)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$4.apply(SparkHadoopMapReduceWriter.scala:144)
    at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1371)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.org$apache$spark$internal$io$SparkHadoopMapReduceWriter$$executeTask(SparkHadoopMapReduceWriter.scala:159)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$3.apply(SparkHadoopMapReduceWriter.scala:89)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$3.apply(SparkHadoopMapReduceWriter.scala:88)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
18/02/22 23:23:24 ERROR SparkHadoopMapReduceWriter: Task attempt_20180222232323_0001_r_000001_0 aborted.
18/02/22 23:23:24 ERROR Executor: Exception in task 1.0 in stage 1.0 (TID 2)
org.apache.spark.SparkException: Task failed while writing rows
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.org$apache$spark$internal$io$SparkHadoopMapReduceWriter$$executeTask(SparkHadoopMapReduceWriter.scala:178)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$3.apply(SparkHadoopMapReduceWriter.scala:89)
    at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$$anonfun$3.apply(SparkHadoopMapReduceWriter.scala:88)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:108)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:338)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.elasticsearch.hadoop.rest.EsHadoopInvalidRequest: Found unrecoverable error [127.0.0.1:9200] returned Bad Request(400) - Rejecting mapping update to [agile_data_science] as the final mapping would have more than 1 type: [executives, test]; Bailing out..
    ...
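
The 400 here comes from Elasticsearch itself: from version 6.0 onward an index may hold only one mapping type, and the agile_data_science index already contains a type named test, so adding a second type executives is rejected. One way out, sketched below, is to drop the conflicting index (or target a brand-new, single-type index) before writing again. This is a hedged sketch, not part of the original gist: it assumes Elasticsearch is on 127.0.0.1:9200 as in the log, that the old test type is disposable, and that the requests package is installed; the new index name agile_data_science_executives is made up for illustration.

import requests

# Drop the index whose existing 'test' type conflicts with 'executives'.
requests.delete("http://127.0.0.1:9200/agile_data_science")

# Retry the write against an index that will hold exactly one type,
# which is all Elasticsearch 6.x allows.
schema_data.saveAsNewAPIHadoopFile(
    path='-',
    outputFormatClass="org.elasticsearch.hadoop.mr.EsOutputFormat",
    keyClass="org.apache.hadoop.io.NullWritable",
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable",
    conf={ "es.resource" : "agile_data_science_executives/executives" })

The same one-index-per-type rule applies however the data is written, so renaming the target in es.resource is usually simpler than trying to merge mappings.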
conda error
conda install -y python=3.5
Solving environment: failed

UnsatisfiableError: The following specifications were found to be in conflict:
  - pyopenssl -> cryptography[version='>=2.1.4'] -> asn1crypto[version='>=0.21.0'] -> python[version='>=3.6,<3.7.0a0']
  - python=3.5
Use "conda info <package>" to see the dependencies for each package.