============================= test session starts ==============================
platform darwin -- Python 3.5.2, pytest-3.2.3, py-1.7.0, pluggy-0.4.0 -- /usr/local/bin/python3.5
cachedir: ../.cache
rootdir: /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__, inifile: tox.ini
collecting ... INFO:dd.datadogpy:No agent or invalid configuration file found
collected 63 items
tests/integration/test_activities_merger.py::TestActivitiesMergerWorkflow::test_abort_with_corrupted_delta INFO:root:Starting the new global session for <class 'castor.TestCastorSession'>
Warning: Ignoring non-spark config property: es.nodes.wan.only=true
https://nexus.tubularlabs.net/repository/libs-release-local/ added as a remote repository with the name: repo-1
Ivy Default Cache set to: /Users/pavloskliar/.ivy2/cache
The jars for the packages stored in: /Users/pavloskliar/.ivy2/jars
:: loading settings :: url = jar:file:/private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/external/pypi__pyspark_2_2_0/pyspark/jars/ivy-2.4.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
org.apache.spark#spark-streaming-kafka-0-8_2.11 added as a dependency
org.apache.spark#spark-sql-kafka-0-10_2.11 added as a dependency
org.elasticsearch#elasticsearch-spark-20_2.11 added as a dependency
com.tubularlabs#confluent-spark-avro_2.11 added as a dependency
datastax#spark-cassandra-connector added as a dependency
mysql#mysql-connector-java added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent;1.0
	confs: [default]
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.ivy.util.url.IvyAuthenticator (file:/private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/external/pypi__pyspark_2_2_0/pyspark/jars/ivy-2.4.0.jar) to field java.net.Authenticator.theAuthenticator
WARNING: Please consider reporting this to the maintainers of org.apache.ivy.util.url.IvyAuthenticator
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
	found org.apache.spark#spark-streaming-kafka-0-8_2.11;2.1.0 in central
	found org.apache.kafka#kafka_2.11;0.8.2.1 in central
	found org.scala-lang.modules#scala-xml_2.11;1.0.2 in central
	found com.yammer.metrics#metrics-core;2.2.0 in central
	found org.slf4j#slf4j-api;1.7.16 in central
	found org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2 in central
	found com.101tec#zkclient;0.3 in central
	found log4j#log4j;1.2.17 in central
	found org.apache.kafka#kafka-clients;0.8.2.1 in central
	found net.jpountz.lz4#lz4;1.3.0 in central
	found org.xerial.snappy#snappy-java;1.1.2.6 in central
	found org.apache.spark#spark-tags_2.11;2.1.0 in central
	found org.scalatest#scalatest_2.11;2.2.6 in central
	found org.scala-lang#scala-reflect;2.11.8 in central
	found org.spark-project.spark#unused;1.0.0 in central
	found org.apache.spark#spark-sql-kafka-0-10_2.11;2.1.1 in central
	found org.apache.kafka#kafka-clients;0.10.0.1 in central
	found org.apache.spark#spark-tags_2.11;2.1.1 in central
	found org.elasticsearch#elasticsearch-spark-20_2.11;6.0.0 in central
	found com.tubularlabs#confluent-spark-avro_2.11;1.2.1 in repo-1
	found datastax#spark-cassandra-connector;2.0.0-M2-s_2.11 in spark-packages
	found commons-beanutils#commons-beanutils;1.8.0 in central
	found org.joda#joda-convert;1.2 in central
	found joda-time#joda-time;2.3 in central
	found io.netty#netty-all;4.0.33.Final in central
	found com.twitter#jsr166e;1.1.0 in central
	found mysql#mysql-connector-java;5.1.39 in central
:: resolution report :: resolve 6997ms :: artifacts dl 11ms
	:: modules in use:
	com.101tec#zkclient;0.3 from central in [default]
	com.tubularlabs#confluent-spark-avro_2.11;1.2.1 from repo-1 in [default]
	com.twitter#jsr166e;1.1.0 from central in [default]
	com.yammer.metrics#metrics-core;2.2.0 from central in [default]
	commons-beanutils#commons-beanutils;1.8.0 from central in [default]
	datastax#spark-cassandra-connector;2.0.0-M2-s_2.11 from spark-packages in [default]
	io.netty#netty-all;4.0.33.Final from central in [default]
	joda-time#joda-time;2.3 from central in [default]
	log4j#log4j;1.2.17 from central in [default]
	mysql#mysql-connector-java;5.1.39 from central in [default]
	net.jpountz.lz4#lz4;1.3.0 from central in [default]
	org.apache.kafka#kafka-clients;0.10.0.1 from central in [default]
	org.apache.kafka#kafka_2.11;0.8.2.1 from central in [default]
	org.apache.spark#spark-sql-kafka-0-10_2.11;2.1.1 from central in [default]
	org.apache.spark#spark-streaming-kafka-0-8_2.11;2.1.0 from central in [default]
	org.apache.spark#spark-tags_2.11;2.1.1 from central in [default]
	org.elasticsearch#elasticsearch-spark-20_2.11;6.0.0 from central in [default]
	org.joda#joda-convert;1.2 from central in [default]
	org.scala-lang#scala-reflect;2.11.8 from central in [default]
	org.scala-lang.modules#scala-parser-combinators_2.11;1.0.2 from central in [default]
	org.scala-lang.modules#scala-xml_2.11;1.0.2 from central in [default]
	org.slf4j#slf4j-api;1.7.16 from central in [default]
	org.spark-project.spark#unused;1.0.0 from central in [default]
	org.xerial.snappy#snappy-java;1.1.2.6 from central in [default]
	:: evicted modules:
	org.apache.kafka#kafka-clients;0.8.2.1 by [org.apache.kafka#kafka-clients;0.10.0.1] in [default]
	org.apache.spark#spark-tags_2.11;2.1.0 by [org.apache.spark#spark-tags_2.11;2.1.1] in [default]
	org.scalatest#scalatest_2.11;2.2.6 transitively in [default]
	---------------------------------------------------------------------
	|                  |            modules            ||   artifacts   |
	|       conf       | number| search|dwnlded|evicted|| number|dwnlded|
	---------------------------------------------------------------------
	|      default     |   27  |   4   |   4   |   3   ||   24  |   0   |
	---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent
	confs: [default]
	0 artifacts copied, 24 already retrieved (0kB/8ms)
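
# The resolution above is what pyspark triggers when a session is created with
# spark.jars.packages set. A minimal, hypothetical sketch of such a setup --
# package coordinates are copied from the log; the app name is invented and
# this is not necessarily how the castor test session is actually built:
#
#     from pyspark.sql import SparkSession
#
#     packages = [
#         'org.apache.spark:spark-streaming-kafka-0-8_2.11:2.1.0',
#         'org.apache.spark:spark-sql-kafka-0-10_2.11:2.1.1',
#         'org.elasticsearch:elasticsearch-spark-20_2.11:6.0.0',
#         'com.tubularlabs:confluent-spark-avro_2.11:1.2.1',
#         'datastax:spark-cassandra-connector:2.0.0-M2-s_2.11',
#         'mysql:mysql-connector-java:5.1.39',
#     ]
#
#     spark = (
#         SparkSession.builder
#         .appName('castor-integration-tests')  # hypothetical name
#         .config('spark.jars.packages', ','.join(packages))
#         .enableHiveSupport()
#         .getOrCreate()
#     )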
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
19/01/09 13:07:04 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO:swissarmy.testutils.dockerfixture:Setting up the following docker fixtures for testing: ['elasticsearch.docker', 'mysql.docker', 'kafka.docker', 'zookeeper.docker'] with project: castor_5ddeo
INFO:swissarmy.testutils.dockerfixture:
INFO:swissarmy.testutils.dockerfixture:Docker compose is starting containers for fixtures
INFO:swissarmy.testutils.dockerfixture:Using health checking (compose v2.1) to verify health
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'elasticsearch.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container isn't healthy yet, retrying...
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (elasticsearch.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'mysql.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (mysql.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'kafka.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (kafka.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Calling `docker-compose -p castor_5ddeo -f /private/var/tmp/_bazel_pavloskliar/4ebdf5e551906a3d6bd81a9029c30c48/execroot/__main__/bazel-out/darwin-fastbuild/bin/tool_castor/castor_integration_tests.runfiles/__main__/tool_castor/tests/integration/../../docker-compose.yml ps| grep 'zookeeper.docker' | grep '(healthy)'` to verify health
INFO:swissarmy.testutils.dockerfixture:Container for test fixture (zookeeper.docker) is healthy.
INFO:swissarmy.testutils.dockerfixture:Successfully set up fixtures, starting tests.
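
# Each health check above shells out to docker-compose and greps for
# '(healthy)', retrying until the container reports healthy. A minimal sketch
# of that polling loop -- the function name, retry count, and delay are
# assumptions, not the actual swissarmy.testutils.dockerfixture internals:
#
#     import subprocess
#     import time
#
#     def wait_until_healthy(project, compose_file, service, retries=30, delay=2):
#         cmd = "docker-compose -p {} -f {} ps | grep '{}' | grep '(healthy)'".format(
#             project, compose_file, service)
#         for _ in range(retries):
#             # grep exits 0 only when a '(healthy)' line matched
#             if subprocess.call(cmd, shell=True) == 0:
#                 return
#             time.sleep(delay)
#         raise RuntimeError("Container for {} never became healthy".format(service))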
FAILED
tests/integration/test_activities_merger.py::TestActivitiesMergerWorkflow::test_workflow FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_base INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_consecutive_runs FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_data_replace FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_future_process_to FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_incorrect_csv_schema FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_init_from_passed FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_no_table FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_recovery_checkpoint_prop_passed FAILED
tests/integration/test_barley_mash_merger.py::TestBarleyMashMergerWorkflow::test_specified_checkpoint FAILED
tests/integration/test_cassandra_loader.py::TestCassandraLoader::test_init_from_args PASSED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_no_merging FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_with_array_unpack FAILED
tests/integration/test_databus_merger.py::TestDatabusMergerWorkflow::test_workflow_with_corrupted_delta FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_base INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_checkpoints_for_different_topics FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_empty_delta FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_empty_delta_after_filter FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint_and_step FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusMergerWorkflow::test_recovery_checkpoint_disabled FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_include_last INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_include_last_and_overlap FAILED
tests/integration/test_databus_s3_merger.py::TestDatabusS3DeltaSource::test_overlap FAILED
tests/integration/test_elastic_loader.py::TestElasticLoader::test_simple INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:castor.loader:Start exporting from elastic://elasticsearch.docker:60200/castor_test/test?es.read.field.as.array.include=topics
FAILED
tests/integration/test_elastic_merger.py::TestElasticMerge::test_elastic_merge_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_calculate_unprocessed_parts INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_delta_df FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_multiple_topics_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_table_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_get_topic_offsets FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_kafka_merge_init_from FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_kafka_merge_workflow FAILED
tests/integration/test_kafka_merger.py::TestKafkaMerger::test_merge_to_non_default_db FAILED
tests/integration/test_loader_partitioned.py::TestLoaderPartitioned::test_load_to_non_default_database INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_loader_partitioned.py::TestLoaderPartitioned::test_partitioning FAILED
tests/integration/test_mysql_loader.py::TestMysqlLoader::test_loader INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:castor.loader:Start exporting from mysql://mysql.docker:60306/castor_test/loader?user=root&password=
FAILED
tests/integration/test_quantum_merger.py::TestQuantumMerger::test_kafka_merge_workflow INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_quantum_merger.py::TestQuantumMerger::test_kafka_merge_workflow ERROR
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_add_field INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_add_field_modify_schema INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_delete_field INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_remove_field_modify_schema INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_schema_evolution.py::TestAvroSchemaEvolution::test_rename_field INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): stage.sr.tubularlabs.net
FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_account_level_values_per_segment INFO:root:Reusing the global session for <class 'castor.TestCastorSession'>
FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_account_level_values_per_segment ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_chunked FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_chunked ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_whole FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_double_ticks_whole ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_chunked FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_chunked ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_whole FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_ascii_trapezoid_single_ticks_whole ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_bucketing_gids FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_bucketing_gids ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_old_streams_ignored FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_old_streams_ignored ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_primary_fields FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_primary_fields ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_replace_merge_works FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_replace_merge_works ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch_none_game FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_and_viewers_on_game_switch_none_game ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_same_game FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_same_fetch_time_same_game ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_seconds_watched_for_unequal_time_ranges_between_measurements FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_seconds_watched_for_unequal_time_ranges_between_measurements ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_segment_and_stream_duration FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_segment_and_stream_duration ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly_when_game_is_null FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_is_segmented_properly_when_game_is_null ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_publish_date_fields FAILED
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_stream_publish_date_fields ERROR
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_viewers_fields FAILED
Stopping castor_5ddeo_kafka.docker_1 ... done
Stopping castor_5ddeo_zookeeper.docker_1 ... done
Stopping castor_5ddeo_mysql.docker_1 ... done
Stopping castor_5ddeo_elasticsearch.docker_1 ... done
Removing castor_5ddeo_kafka.docker_1 ... done
Removing castor_5ddeo_zookeeper.docker_1 ... done
Removing castor_5ddeo_mysql.docker_1 ... done
Removing castor_5ddeo_elasticsearch.docker_1 ... done
Removing network castor_5ddeo_default
tests/integration/test_twitch_streams_merger.py::TestTwitchStreamMerger::test_viewers_fields ERROR
==================================== ERRORS ====================================
_______ ERROR at teardown of TestQuantumMerger.test_kafka_merge_workflow _______
a = ('xro856', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro856'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at scala.Option.getOrElse(Option.scala:121)
E       	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       	at py4j.Gateway.invoke(Gateway.java:280)
E       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       	at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       	at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       	... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       	... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       	... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       	... 40 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:

self = <tests.integration.test_quantum_merger.TestQuantumMerger testMethod=test_kafka_merge_workflow>

    def tearDown(self):
>       self.spark.sql('DROP TABLE IF EXISTS {}'.format(self.table_name))

tests/integration/test_quantum_merger.py:33:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro856', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
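
# All teardown errors in this run share the root cause visible above: the
# Ivy-resolved classpath contains no Hive jars, so
# org/apache/hadoop/hive/ql/session/SessionState cannot be loaded and the
# HiveSessionStateBuilder fails to instantiate. Following the hint in the
# error text, one plausible fix is to point the session at matching
# hive/hadoop jars -- a sketch only; the version and paths below are
# placeholders, not values from this repo:
#
#     from pyspark.sql import SparkSession
#
#     spark = (
#         SparkSession.builder
#         # both values are placeholders and must match the metastore in use
#         .config('spark.sql.hive.metastore.version', '1.2.1')
#         .config('spark.sql.hive.metastore.jars',
#                 '/path/to/hive/lib/*:/path/to/hadoop/lib/*')
#         .enableHiveSupport()
#         .getOrCreate()
#     )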
__ ERROR at teardown of TestTwitchStreamMerger.test_account_level_values_per_segment __
a = ('xro1015', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1015'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at scala.Option.getOrElse(Option.scala:121)
E       	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       	at py4j.Gateway.invoke(Gateway.java:280)
E       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       	at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       	at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       	... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       	... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       	... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       	... 40 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_account_level_values_per_segment>

    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)

tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1015', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_ ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_chunked _
a = ('xro1069', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1069'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at scala.Option.getOrElse(Option.scala:121)
E       	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       	at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       	at py4j.Gateway.invoke(Gateway.java:280)
E       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       	at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       	at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       	... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       	... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       	... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       	... 40 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_chunked> | |
[1m def tearDown(self):[0m | |
[1m> self.spark.catalog_ext.drop_table(self.TABLE)[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:40: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__sparkly_2_5_1/sparkly/catalog.py[0m:105: in drop_table | |
[1m '{} {}'.format(drop_statement, table_name)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:556: in sql | |
[1m return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro1069', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
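The root cause is the same for every failure in this session: the isolated Hive client cannot load org.apache.hadoop.hive.ql.session.SessionState, and the message above points at spark.sql.hive.metastore.jars. A minimal sketch of one way to address it follows, assuming a local pyspark 2.2 session; the metastore version shown is an illustrative assumption, not this suite's actual configuration:

    # Hedged sketch: tell Spark where the Hive client classes come from, so the
    # IsolatedClientLoader can find SessionState. These configs must be set
    # before the SparkSession (and its underlying SparkContext) is created.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config('spark.sql.hive.metastore.version', '1.2.1')  # assumed version
        # 'builtin' uses the Hive classes bundled with Spark, 'maven' downloads
        # them, or a classpath containing the hive/hadoop jars can be given.
        .config('spark.sql.hive.metastore.jars', 'builtin')
        .enableHiveSupport()
        .getOrCreate()
    )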
ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_whole
(this teardown fails with the same Py4JJavaError and pyspark.sql.utils.IllegalArgumentException traceback shown above: 'org.apache.spark.sql.hive.HiveSessionStateBuilder' cannot be instantiated because org/apache/hadoop/hive/ql/session/SessionState is missing from the Hive client classpath)
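Every subsequent teardown in TestTwitchStreamMerger trips over the same broken session, because tearDown funnels a DROP TABLE statement through SparkSession.sql, which must build the Hive-backed session state first. A hypothetical defensive variant is sketched below; the guard is an assumption for illustration, not the suite's real code (the real tearDown is at tests/integration/test_twitch_streams_merger.py:40):

    # Hedged sketch: tolerate the Hive client failure so that one broken
    # session does not turn every test's teardown into a separate ERROR.
    import unittest

    from pyspark.sql.utils import IllegalArgumentException

    class TwitchStreamMergerTestBase(unittest.TestCase):  # hypothetical base class
        # self.spark and self.TABLE are assumed to be provided by the suite's setUp.
        def tearDown(self):
            try:
                self.spark.catalog_ext.drop_table(self.TABLE)  # sparkly catalog extension
            except IllegalArgumentException:
                # The Hive session state never came up, so the table was never
                # registered in a metastore and there is nothing to drop.
                pass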
ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_single_ticks_chunked
(same teardown traceback as above)
ERROR at teardown of TestTwitchStreamMerger.test_ascii_trapezoid_single_ticks_whole
(same teardown traceback as above)
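Because the Hive client is only built lazily, on the first SQL call of the session, the failure surfaces once per teardown instead of once at startup. A rough pre-flight check is sketched below; it assumes driver-side visibility of the Hive classes, and since the real lookup happens inside IsolatedClientLoader's own classloader it is only an approximation:

    # Hedged sketch: fail fast during session setup if SessionState is not even
    # on the driver JVM's classpath, rather than erroring in every tearDown.
    def hive_client_classes_visible(spark):
        try:
            spark.sparkContext._jvm.java.lang.Class.forName(
                'org.apache.hadoop.hive.ql.session.SessionState')
            return True
        except Exception:  # py4j surfaces ClassNotFoundException as Py4JJavaError
            return False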
_______ ERROR at teardown of TestTwitchStreamMerger.test_bucketing_gids ________
(same teardown traceback as above)
_____ ERROR at teardown of TestTwitchStreamMerger.test_old_streams_ignored _____
a = ('xro1338', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1338'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_old_streams_ignored>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1338', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
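
The deco wrapper quoted in each traceback is the mechanism by which pyspark turns a raw Py4JJavaError into the typed exceptions in pyspark.sql.utils; the s and stackTrace locals shown above are exactly what it passes to IllegalArgumentException. A hedged sketch of catching the translated error from calling code (the SQL statement is a placeholder):

    from pyspark.sql.utils import IllegalArgumentException

    try:
        spark.sql('DROP TABLE IF EXISTS some_table')  # placeholder statement
    except IllegalArgumentException as exc:
        # pyspark's CapturedException keeps the Java message in exc.desc and
        # the joined Java frames in exc.stackTrace, mirroring the s and
        # stackTrace locals in the traceback above.
        print('Hive session state could not be built:', exc.desc)
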
_______ ERROR at teardown of TestTwitchStreamMerger.test_primary_fields ________
a = ('xro1391', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1391'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_primary_fields>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1391', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_____ ERROR at teardown of TestTwitchStreamMerger.test_replace_merge_works _____
a = ('xro1444', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1444'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_replace_merge_works>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1444', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_and_viewers_on_game_switch
a = ('xro1497', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1497'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 17 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 34 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 39 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_same_fetch_time_and_viewers_on_game_switch>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1497', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
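
Each of these teardown errors is the same failure repeated once per test: tearDown unconditionally calls self.spark.catalog_ext.drop_table(self.TABLE) against a session whose Hive state can no longer be built. One defensive option, sketched here as an assumption about how the fixture could be hardened rather than as the project's actual code, is to make the cleanup best-effort:

    from py4j.protocol import Py4JJavaError
    from pyspark.sql.utils import CapturedException

    def tearDown(self):
        # Best-effort cleanup: if the Hive session is already broken, the
        # DROP TABLE fails for the same root cause as the test body, so
        # swallow it instead of reporting a second error per test.
        try:
            self.spark.catalog_ext.drop_table(self.TABLE)
        except (Py4JJavaError, CapturedException):
            pass
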
ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_and_viewers_on_game_switch_none_game
a = ('xro1550', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1550'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 16 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 33 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 38 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 39 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_same_fetch_time_and_viewers_on_game_switch_none_game>
    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)
tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
[1m return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro1550', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
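The whole cascade above reduces to one root cause: the isolated Hive client is built only from the ~/.ivy2 connector jars listed in the "Caused by" line, and that classpath contains no Hive/Hadoop jars, so org.apache.hadoop.hive.ql.session.SessionState cannot be loaded and every SparkSession.sql() call dies while instantiating HiveSessionStateBuilder. The exception text itself names the knob: spark.sql.hive.metastore.jars. A minimal sketch of one way to apply it when building the test session, assuming the session goes through pyspark's public builder (the builder call site and the metastore version are illustrative assumptions, not taken from this project):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        # Let the metastore client use the Hive classes bundled with Spark
        # instead of the incomplete ivy-resolved connector classpath.
        .config("spark.sql.hive.metastore.jars", "builtin")
        # "builtin" requires the metastore version to match Spark's bundled
        # Hive (1.2.1 for Spark 2.2); "maven" or an explicit classpath are
        # the alternatives accepted by this setting.
        .config("spark.sql.hive.metastore.version", "1.2.1")
        .enableHiveSupport()
        .getOrCreate()
    )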
__ ERROR at teardown of TestTwitchStreamMerger.test_same_fetch_time_same_game __ | |
(same Py4JJavaError / IllegalArgumentException traceback as above, raised from self.spark.catalog_ext.drop_table(self.TABLE) in tearDown) | |
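None of these teardown ERRORs are independent failures: once session state fails to come up, every tearDown re-raises the identical exception through the same drop_table call. A defensive variant of the tearDown shown in the traceback would keep one broken session from fanning out into an ERROR per test; this is a sketch of that idea only, reconstructed from the frames above rather than from the project source:

    from pyspark.sql.utils import IllegalArgumentException

    def tearDown(self):
        # Best-effort cleanup: if Hive session state never came up (see the
        # traceback above), there is no table to drop and nothing to clean.
        try:
            self.spark.catalog_ext.drop_table(self.TABLE)
        except IllegalArgumentException:
            pass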
_ ERROR at teardown of TestTwitchStreamMerger.test_seconds_watched_for_unequal_time_ranges_between_measurements _ | |
(same traceback as above) | |
_ ERROR at teardown of TestTwitchStreamMerger.test_segment_and_stream_duration _ | |
(same traceback as above) | |
_ ERROR at teardown of TestTwitchStreamMerger.test_stream_is_segmented_properly _ | |
(same traceback as above) | |
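The deco() wrapper repeated in every traceback is pyspark's translation layer: it inspects the Java exception's string prefix and re-raises a typed Python exception from pyspark.sql.utils, which is why each failure ends in IllegalArgumentException rather than a bare Py4JJavaError. Calling code can rely on that mapping without touching py4j; a short illustrative use against any SparkSession `spark` (the table name is a placeholder):

    from pyspark.sql.utils import AnalysisException, IllegalArgumentException

    try:
        spark.sql("DROP TABLE some_db.some_table")
    except AnalysisException:
        pass  # e.g. the table does not exist
    except IllegalArgumentException:
        pass  # e.g. session state failed to instantiate, as in this log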
_ ERROR at teardown of TestTwitchStreamMerger.test_stream_is_segmented_properly_when_game_is_null _ | |
a = ('xro1810', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro1810' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o26', name = 'sql' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)[0m | |
[1m[31mE at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 16 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 33 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 38 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 39 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_stream_is_segmented_properly_when_game_is_null> | |
[1m def tearDown(self):[0m | |
[1m> self.spark.catalog_ext.drop_table(self.TABLE)[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:40: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__sparkly_2_5_1/sparkly/catalog.py[0m:105: in drop_table | |
[1m '{} {}'.format(drop_statement, table_name)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:556: in sql | |
[1m return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro1810', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
_ ERROR at teardown of TestTwitchStreamMerger.test_stream_publish_date_fields __

a = ('xro1862', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1862'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 16 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 33 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 38 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 39 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_stream_publish_date_fields>

    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)

tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1862', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_______ ERROR at teardown of TestTwitchStreamMerger.test_viewers_fields ________

a = ('xro1914', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro1914'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E         at jdk.internal.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 16 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 33 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 38 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 39 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_viewers_fields>

    def tearDown(self):
>       self.spark.catalog_ext.drop_table(self.TABLE)

tests/integration/test_twitch_streams_merger.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:105: in drop_table
    '{} {}'.format(drop_statement, table_name)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro1914', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
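All of the teardown errors above are the same cascade: setUp never obtained a working Hive session state, so the DROP TABLE issued in tearDown re-triggers the identical Py4JJavaError for every test. A sketch of a more forgiving teardown for this suite (hypothetical; self.TABLE and catalog_ext come from the test base class shown in the traces):

    # Best-effort table drop: a broken Hive session should not convert
    # every test into an extra teardown ERROR on top of its own failure.
    from pyspark.sql.utils import IllegalArgumentException

    def tearDown(self):
        try:
            self.spark.catalog_ext.drop_table(self.TABLE)
        except IllegalArgumentException:
            # Session state never initialised, so there is no table to drop.
            pass

pyspark.sql.utils.IllegalArgumentException is exactly the type raised at utils.py:79 in the traces above, so the except clause matches the observed failure mode.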
=================================== FAILURES ===================================
_________ TestActivitiesMergerWorkflow.test_abort_with_corrupted_delta _________

self = <tests.integration.test_activities_merger.TestActivitiesMergerWorkflow testMethod=test_abort_with_corrupted_delta>

    def setUp(self):
        super(TestActivitiesMergerWorkflow, self).setUp()

        install_databus_reader_and_writer()
        self.table_name = 'test_activities'
        self.topic = 'test.castor.activities.r1'
        self.schemas_root = absolute_path(__file__, 'resources', 'databus')

        # publish schema to schema-registry
        sr_remote = RemoteSchemaRegistryClient(self.sr_host)
        idl = IDLUtils(sr_remote)
>       sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)

tests/integration/test_activities_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
    self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
    schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
    topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
    self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>

    @classmethod
    def _locate_avrotools(cls):
        """Tried different places to find avroplane-tool.jar."""
        paths = [
            '/opt/tubular/lib/avro-tools.jar',
            os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
        ]
        for path in paths:
            if os.path.exists(path):
                return path

        raise NotImplementedError('Cannot find avro-tools.jar archive locally '
                                  'and failed to download. '
>                                 'Expected location is one of: {}'.format(paths))
E       NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']

../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
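_locate_avrotools only probes the two paths named in the message, so the fix is simply to put an avro-tools jar at either location before the suite runs. A sketch that fetches it into the per-user path (the Maven Central URL and the 1.8.2 version are assumptions, not values taken from this run):

    # Download avro-tools.jar into ~/.avroplane/, the second location
    # that avroplane.fs.IDLUtils._locate_avrotools checks.
    import os
    import urllib.request

    url = ('https://repo1.maven.org/maven2/org/apache/avro/'
           'avro-tools/1.8.2/avro-tools-1.8.2.jar')
    dest = os.path.join(os.path.expanduser('~'), '.avroplane', 'avro-tools.jar')
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)

Any avro-tools build that can compile the IDL files under tests/integration/resources/databus should presumably work the same way when dropped at /opt/tubular/lib/avro-tools.jar instead.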
__________________ TestActivitiesMergerWorkflow.test_workflow __________________

self = <tests.integration.test_activities_merger.TestActivitiesMergerWorkflow testMethod=test_workflow>

    def setUp(self):
        super(TestActivitiesMergerWorkflow, self).setUp()

        install_databus_reader_and_writer()
        self.table_name = 'test_activities'
        self.topic = 'test.castor.activities.r1'
        self.schemas_root = absolute_path(__file__, 'resources', 'databus')

        # publish schema to schema-registry
        sr_remote = RemoteSchemaRegistryClient(self.sr_host)
        idl = IDLUtils(sr_remote)
>       sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)

tests/integration/test_activities_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
    self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
    schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
    topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
    self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
cls = <class 'avroplane.fs.IDLUtils'>

    @classmethod
    def _locate_avrotools(cls):
        """Tried different places to find avroplane-tool.jar."""
        paths = [
            '/opt/tubular/lib/avro-tools.jar',
            os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
        ]
        for path in paths:
            if os.path.exists(path):
                return path

        raise NotImplementedError('Cannot find avro-tools.jar archive locally '
                                  'and failed to download. '
>                                 'Expected location is one of: {}'.format(paths))
E       NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']

../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
____________________ TestBarleyMashMergerWorkflow.test_base ____________________

a = ('xro41', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro41'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E         at scala.Option.getOrElse(Option.scala:121)
E         at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E         at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E         at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E         at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E         at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E         at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E         at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E         at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E         at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E         at py4j.Gateway.invoke(Gateway.java:280)
E         at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E         at py4j.commands.CallCommand.execute(CallCommand.java:79)
E         at py4j.GatewayConnection.run(GatewayConnection.java:214)
E         at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E         at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E         at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E         at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E         at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E         at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E         at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E         ... 19 more
E       Caused by: java.lang.reflect.InvocationTargetException
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E         at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E         at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E         at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E         ... 36 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E         at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E         ... 41 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E         at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E         at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E         at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E         ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_base>

    def setUp(self):
        super().setUp()

        output_table_schema = (
            'date:string,'
            'domain:string,'
            'device_id:string,'
            'url:string,'
            'click_type:string,'
            'device_platform:string,'
            'country_code:string,'
            'gender:string,'
            'age:string,'
            'timestamp:timestamp,'
            'time_spent:long'
        )

        warehouse_path = tempfile.mkdtemp()
>       empty_df = self.spark.createDataFrame([], output_table_schema)

tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro41', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
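test_base never reaches its assertions: even createDataFrame([], schema) has to build session state, which walks into the same broken Hive client. For tests that do not actually need a metastore, a sketch of the same empty-DataFrame pattern on a Hive-free session (spark.sql.catalogImplementation is a standard Spark setting; using it here instead of the suite's shared session is an assumption, and the schema string is abbreviated from setUp above):

    # Empty DataFrame with an explicit 'name:type' schema string on a
    # local session backed by Spark's in-memory catalog instead of Hive.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master('local[1]')
        .config('spark.sql.catalogImplementation', 'in-memory')
        .getOrCreate()
    )
    empty_df = spark.createDataFrame([], 'date:string,domain:string,time_spent:long')

With the in-memory catalog the HiveSessionStateBuilder is never instantiated, so schema parsing and applySchemaToPythonRDD proceed exactly as in the trace above, minus the Hive client failure.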
______________ TestBarleyMashMergerWorkflow.test_consecutive_runs ______________

a = ('xro75', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro75'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_consecutive_runs>

    def setUp(self):
        super().setUp()

        output_table_schema = (
            'date:string,'
            'domain:string,'
            'device_id:string,'
            'url:string,'
            'click_type:string,'
            'device_platform:string,'
            'country_code:string,'
            'gender:string,'
            'age:string,'
            'timestamp:timestamp,'
            'time_spent:long'
        )

        warehouse_path = tempfile.mkdtemp()
>       empty_df = self.spark.createDataFrame([], output_table_schema)

tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro75', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
________________ TestBarleyMashMergerWorkflow.test_data_replace ________________

a = ('xro109', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro109'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_data_replace>

    def setUp(self):
        super().setUp()

        output_table_schema = (
            'date:string,'
            'domain:string,'
            'device_id:string,'
            'url:string,'
            'click_type:string,'
            'device_platform:string,'
            'country_code:string,'
            'gender:string,'
            'age:string,'
            'timestamp:timestamp,'
            'time_spent:long'
        )

        warehouse_path = tempfile.mkdtemp()
>       empty_df = self.spark.createDataFrame([], output_table_schema)

tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro109', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_____________ TestBarleyMashMergerWorkflow.test_future_process_to ______________

a = ('xro143', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro143'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_future_process_to>

    def setUp(self):
        super().setUp()

        output_table_schema = (
            'date:string,'
            'domain:string,'
            'device_id:string,'
            'url:string,'
            'click_type:string,'
            'device_platform:string,'
            'country_code:string,'
            'gender:string,'
            'age:string,'
            'timestamp:timestamp,'
            'time_spent:long'
        )

        warehouse_path = tempfile.mkdtemp()
>       empty_df = self.spark.createDataFrame([], output_table_schema)

tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro143', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
____________ TestBarleyMashMergerWorkflow.test_incorrect_csv_schema ____________

a = ('xro177', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro177'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_incorrect_csv_schema>

    def setUp(self):
        super().setUp()

        output_table_schema = (
            'date:string,'
            'domain:string,'
            'device_id:string,'
            'url:string,'
            'click_type:string,'
            'device_platform:string,'
            'country_code:string,'
            'gender:string,'
            'age:string,'
            'timestamp:timestamp,'
            'time_spent:long'
        )

        warehouse_path = tempfile.mkdtemp()
>       empty_df = self.spark.createDataFrame([], output_table_schema)

tests/integration/test_barley_mash_merger.py:42:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro177', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
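
The block below (test_init_from_passed) repeats the same chain once more against the same JVM target o26: every test here shares one SparkSession, so the broken Hive session state fails them all identically. If Hive support is genuinely required, the hint in the traceback is the fix: the jars visible to spark.sql.hive.metastore.jars (or on the driver classpath) must actually contain the Hive classes. A hedged sketch, assuming Spark 2.2.0 on Scala 2.11; the Maven coordinate and settings are assumptions, not taken from this project's configuration:

from pyspark.sql import SparkSession

# Sketch only: keep Hive support but ship Hive classes with the session.
# 'builtin' (the default) points the metastore client at the Hive classes
# bundled with Spark's spark-hive module; the package coordinate below is
# an assumption for a Spark 2.2.0 / Scala 2.11 build.
spark = (
    SparkSession.builder
    .config('spark.jars.packages', 'org.apache.spark:spark-hive_2.11:2.2.0')
    .config('spark.sql.hive.metastore.jars', 'builtin')
    .enableHiveSupport()
    .getOrCreate()
)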
______________ TestBarleyMashMergerWorkflow.test_init_from_passed ______________

a = ('xro211', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro211'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E   at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_barley_mash_merger.TestBarleyMashMergerWorkflow testMethod=test_init_from_passed> | |
[1m def setUp(self):[0m | |
[1m super().setUp()[0m | |
[1m [0m | |
[1m output_table_schema = ([0m | |
[1m 'date:string,'[0m | |
[1m 'domain:string,'[0m | |
[1m 'device_id:string,'[0m | |
[1m 'url:string,'[0m | |
[1m 'click_type:string,'[0m | |
[1m 'device_platform:string,'[0m | |
[1m 'country_code:string,'[0m | |
[1m 'gender:string,'[0m | |
[1m 'age:string,'[0m | |
[1m 'timestamp:timestamp,'[0m | |
[1m 'time_spent:long'[0m | |
[1m )[0m | |
[1m [0m | |
[1m warehouse_path = tempfile.mkdtemp()[0m | |
[1m> empty_df = self.spark.createDataFrame([], output_table_schema)[0m | |
[1m[31mtests/integration/test_barley_mash_merger.py[0m:42: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:539: in createDataFrame | |
[1m jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro211', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
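[note: the TestBarleyMashMergerWorkflow setUp failures in this run share the root cause shown in the last "Caused by" above: org.apache.hadoop.hive.ql.session.SessionState is not on the isolated Hive client's classpath, and the error text itself points at spark.sql.hive.metastore.jars. Below is a minimal sketch of one possible fix, applied wherever the test SparkSession is built; the builder options are illustrative assumptions, not taken from this log:

    # Sketch only: make Hive/Hadoop classes visible to the metastore client.
    # 'maven' asks Spark to resolve matching hive/hadoop jars itself; a
    # colon-separated classpath of local jars is the offline alternative.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config('spark.sql.hive.metastore.version', '1.2.1')  # assumed version
        .config('spark.sql.hive.metastore.jars', 'maven')
        .enableHiveSupport()
        .getOrCreate()
    )
]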
__________________ TestBarleyMashMergerWorkflow.test_no_table __________________

[setUp fails identically to test_init_from_passed above: createDataFrame at tests/integration/test_barley_mash_merger.py:42 raises the same Py4JJavaError; the verbatim duplicate traceback is omitted]

E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
______ TestBarleyMashMergerWorkflow.test_recovery_checkpoint_prop_passed _______

[setUp fails identically to test_init_from_passed above: createDataFrame at tests/integration/test_barley_mash_merger.py:42 raises the same Py4JJavaError; the verbatim duplicate traceback is omitted]

E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
____________ TestBarleyMashMergerWorkflow.test_specified_checkpoint ____________

[setUp fails identically to test_init_from_passed above: createDataFrame at tests/integration/test_barley_mash_merger.py:42 raises the same Py4JJavaError; the verbatim duplicate traceback is omitted]

E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
___________________ TestDatabusMergerWorkflow.test_workflow ____________________

self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow>

    def setUp(self):
        super(TestDatabusMergerWorkflow, self).setUp()

        install_databus_reader_and_writer()
        self.table_name = 'test_castor_databus'
        self.topic = 'test.castor.databus.r2'
        self.topic_no_merge = 'test.castor.databus.no.merge.r2'
        self.topic_arrays = 'test.castor.databus.arrays.r2'
        self.schemas_root = absolute_path(__file__, 'resources', 'databus')

        # publish schema to schema-registry
        sr_remote = RemoteSchemaRegistryClient(self.sr_host)
        idl = IDLUtils(sr_remote)
>       sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)

tests/integration/test_databus_merger.py:41:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../pkg_avroplane/avroplane/fs.py:69: in __init__
    self._cache = self.discover_schemas(base_path, idl)
../pkg_avroplane/avroplane/fs.py:137: in discover_schemas
    schema = idl.to_json(schema_path)
../pkg_avroplane/avroplane/fs.py:545: in to_json
    topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)
../pkg_avroplane/avroplane/fs.py:500: in idl_to_topic_schemas
    self._locate_avrotools(),
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cls = <class 'avroplane.fs.IDLUtils'>

    @classmethod
    def _locate_avrotools(cls):
        """Tried different places to find avroplane-tool.jar."""
        paths = [
            '/opt/tubular/lib/avro-tools.jar',
            os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),
        ]
        for path in paths:
            if os.path.exists(path):
                return path

        raise NotImplementedError('Cannot find avro-tools.jar archive locally '
                                  'and failed to download. '
>                                 'Expected location is one of: {}'.format(paths))
E       NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']

../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
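[note: the TestDatabusMergerWorkflow failures in this run all stop in _locate_avrotools(), which only probes the two paths printed in the error. Below is a minimal sketch of one way to satisfy it before running the suite; the avro-tools version and Maven Central URL are assumptions, not taken from this log:

    # Sketch only: place avro-tools.jar in one of the locations that
    # _locate_avrotools() probes (~/.avroplane/avro-tools.jar).
    import os
    import urllib.request

    dest = os.path.join(os.path.expanduser('~'), '.avroplane', 'avro-tools.jar')
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    url = ('https://repo1.maven.org/maven2/org/apache/avro/'
           'avro-tools/1.8.2/avro-tools-1.8.2.jar')  # assumed version/mirror
    urllib.request.urlretrieve(url, dest)
]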
______________ TestDatabusMergerWorkflow.test_workflow_no_merging ______________

[setUp fails identically to test_workflow above: _locate_avrotools() cannot find avro-tools.jar; the verbatim duplicate traceback is omitted]

E       NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar']

../pkg_avroplane/avroplane/fs.py:631: NotImplementedError
[1m[31m__________ TestDatabusMergerWorkflow.test_workflow_with_array_unpack ___________[0m | |
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow_with_array_unpack> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m [0m | |
[1m install_databus_reader_and_writer()[0m | |
[1m self.table_name = 'test_castor_databus'[0m | |
[1m self.topic = 'test.castor.databus.r2'[0m | |
[1m self.topic_no_merge = 'test.castor.databus.no.merge.r2'[0m | |
[1m self.topic_arrays = 'test.castor.databus.arrays.r2'[0m | |
[1m self.schemas_root = absolute_path(__file__, 'resources', 'databus')[0m | |
[1m [0m | |
[1m # publish schema to schema-registry[0m | |
[1m sr_remote = RemoteSchemaRegistryClient(self.sr_host)[0m | |
[1m idl = IDLUtils(sr_remote)[0m | |
[1m> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)[0m | |
[1m[31mtests/integration/test_databus_merger.py[0m:41: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:69: in __init__ | |
[1m self._cache = self.discover_schemas(base_path, idl)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:137: in discover_schemas | |
[1m schema = idl.to_json(schema_path)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:545: in to_json | |
[1m topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:500: in idl_to_topic_schemas | |
[1m self._locate_avrotools(),[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
cls = <class 'avroplane.fs.IDLUtils'> | |
[1m @classmethod[0m | |
[1m def _locate_avrotools(cls):[0m | |
[1m        """Tries different places to find avro-tools.jar."""[0m | |
[1m paths = [[0m | |
[1m '/opt/tubular/lib/avro-tools.jar',[0m | |
[1m os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),[0m | |
[1m ][0m | |
[1m for path in paths:[0m | |
[1m if os.path.exists(path):[0m | |
[1m return path[0m | |
[1m [0m | |
[1m raise NotImplementedError('Cannot find avro-tools.jar archive locally '[0m | |
[1m 'and failed to download. '[0m | |
[1m> 'Expected location is one of: {}'.format(paths))[0m | |
[1m[31mE NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar'][0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:631: NotImplementedError | |
[1m[31m_________ TestDatabusMergerWorkflow.test_workflow_with_corrupted_delta _________[0m | |
self = <tests.integration.test_databus_merger.TestDatabusMergerWorkflow testMethod=test_workflow_with_corrupted_delta> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m [0m | |
[1m install_databus_reader_and_writer()[0m | |
[1m self.table_name = 'test_castor_databus'[0m | |
[1m self.topic = 'test.castor.databus.r2'[0m | |
[1m self.topic_no_merge = 'test.castor.databus.no.merge.r2'[0m | |
[1m self.topic_arrays = 'test.castor.databus.arrays.r2'[0m | |
[1m self.schemas_root = absolute_path(__file__, 'resources', 'databus')[0m | |
[1m [0m | |
[1m # publish schema to schema-registry[0m | |
[1m sr_remote = RemoteSchemaRegistryClient(self.sr_host)[0m | |
[1m idl = IDLUtils(sr_remote)[0m | |
[1m> sr_local = LocalSchemaRegistryClient(self.schemas_root, idl)[0m | |
[1m[31mtests/integration/test_databus_merger.py[0m:41: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:69: in __init__ | |
[1m self._cache = self.discover_schemas(base_path, idl)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:137: in discover_schemas | |
[1m schema = idl.to_json(schema_path)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:545: in to_json | |
[1m topic, key_schema, value_schema = self.idl_to_topic_schemas(file_path)[0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:500: in idl_to_topic_schemas | |
[1m self._locate_avrotools(),[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
cls = <class 'avroplane.fs.IDLUtils'> | |
[1m @classmethod[0m | |
[1m def _locate_avrotools(cls):[0m | |
[1m        """Tries different places to find avro-tools.jar."""[0m | |
[1m paths = [[0m | |
[1m '/opt/tubular/lib/avro-tools.jar',[0m | |
[1m os.path.join(os.path.expanduser("~"), '.avroplane', 'avro-tools.jar'),[0m | |
[1m ][0m | |
[1m for path in paths:[0m | |
[1m if os.path.exists(path):[0m | |
[1m return path[0m | |
[1m [0m | |
[1m raise NotImplementedError('Cannot find avro-tools.jar archive locally '[0m | |
[1m 'and failed to download. '[0m | |
[1m> 'Expected location is one of: {}'.format(paths))[0m | |
[1m[31mE NotImplementedError: Cannot find avro-tools.jar archive locally and failed to download. Expected location is one of: ['/opt/tubular/lib/avro-tools.jar', '/Users/pavloskliar/.avroplane/avro-tools.jar'][0m | |
[1m[31m../pkg_avroplane/avroplane/fs.py[0m:631: NotImplementedError | |
[1m[31m_____________________ TestDatabusMergerWorkflow.test_base ______________________[0m | |
a = ('xro335', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro335' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o27', name = 'read' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)[0m | |
[1m[31mE at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_base> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m> test_df = self.spark.read.parquet(self.input_data_path)[0m | |
[1m[31mtests/integration/test_databus_s3_merger.py[0m:23: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:580: in read | |
[1m return DataFrameReader(self._wrapped)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py[0m:70: in __init__ | |
[1m self._jreader = spark._ssql_ctx.read()[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro335', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
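The Py4JJavaError above bottoms out in NoClassDefFoundError for org/apache/hadoop/hive/ql/session/SessionState: the Ivy-resolved classpath listed in the trace contains no Hive jars, so Spark cannot build HiveSessionStateBuilder. The trace itself points at spark.sql.hive.metastore.jars. A minimal sketch of a session configuration that addresses this, assuming network access for the "maven" option (the metastore version string is an assumption; match it to your actual Hive deployment):

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName("castor-integration-tests")  # placeholder name
        # Let Spark pull a matching Hive metastore client from Maven;
        # alternatively, point this at a local classpath containing the
        # Hive and Hadoop jars, as the error message suggests.
        .config("spark.sql.hive.metastore.version", "1.2.1")
        .config("spark.sql.hive.metastore.jars", "maven")
        .enableHiveSupport()
        .getOrCreate()
    )

If these tests never touch the Hive catalog, setting spark.sql.catalogImplementation to in-memory instead of calling enableHiveSupport() sidesteps HiveSessionStateBuilder entirely.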
[1m[31m_______ TestDatabusMergerWorkflow.test_checkpoints_for_different_topics ________[0m | |
a = ('xro357', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro357' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o27', name = 'read' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)[0m | |
[1m[31mE at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_checkpoints_for_different_topics> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m> test_df = self.spark.read.parquet(self.input_data_path)[0m | |
[1m[31mtests/integration/test_databus_s3_merger.py[0m:23: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:580: in read | |
[1m return DataFrameReader(self._wrapped)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py[0m:70: in __init__ | |
[1m self._jreader = spark._ssql_ctx.read()[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro357', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
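As the deco() wrapper captured above shows, pyspark rewrites the JVM-side java.lang.IllegalArgumentException into pyspark.sql.utils.IllegalArgumentException, which is what ultimately surfaces from setUp. A short sketch of pinning that translated exception down in a test, with spark and input_data_path as hypothetical stand-ins for the session and parquet path used in setUp():

    import pytest
    from pyspark.sql.utils import IllegalArgumentException

    def test_parquet_read_surfaces_translated_error(spark, input_data_path):
        # deco() converts the Java-side IllegalArgumentException into the
        # pyspark.sql.utils type, so that is what pytest should catch here.
        with pytest.raises(IllegalArgumentException) as excinfo:
            spark.read.parquet(input_data_path)
        assert "HiveSessionStateBuilder" in str(excinfo.value)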
[1m[31m__________________ TestDatabusMergerWorkflow.test_empty_delta __________________[0m | |
a = ('xro379', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro379' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o27', name = 'read' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)[0m | |
[1m[31mE at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_empty_delta> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m> test_df = self.spark.read.parquet(self.input_data_path)[0m | |
[1m[31mtests/integration/test_databus_s3_merger.py[0m:23: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:580: in read | |
[1m return DataFrameReader(self._wrapped)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py[0m:70: in __init__ | |
[1m self._jreader = spark._ssql_ctx.read()[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro379', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
[1m[31m___________ TestDatabusMergerWorkflow.test_empty_delta_after_filter ____________[0m | |
a = ('xro401', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro401' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o27', name = 'read' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)[0m | |
[1m[31mE at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_empty_delta_after_filter> | |
[1m def setUp(self):[0m | |
[1m super(TestDatabusMergerWorkflow, self).setUp()[0m | |
[1m> test_df = self.spark.read.parquet(self.input_data_path)[0m | |
[1m[31mtests/integration/test_databus_s3_merger.py[0m:23: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:580: in read | |
[1m return DataFrameReader(self._wrapped)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py[0m:70: in __init__ | |
[1m self._jreader = spark._ssql_ctx.read()[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro401', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
______________ TestDatabusMergerWorkflow.test_recovery_checkpoint ______________
a = ('xro423', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro423'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E     at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E     at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E     at scala.Option.getOrElse(Option.scala:121)
E     at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E     at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E     at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E     at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E     at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E     at py4j.Gateway.invoke(Gateway.java:280)
E     at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E     at py4j.commands.CallCommand.execute(CallCommand.java:79)
E     at py4j.GatewayConnection.run(GatewayConnection.java:214)
E     at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E     at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E     at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E     at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E     at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E     at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E     at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E     at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E     at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E     at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint>
    def setUp(self):
        super(TestDatabusMergerWorkflow, self).setUp()
>       test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
    return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
    self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro423', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
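The trace's own hint ("Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.") points at the other fix: keep Hive support, but give the isolated client loader a classpath that actually contains Hive. A hedged sketch, assuming the metastore should match Spark 2.2's built-in Hive 1.2.1; both values are illustrative, not taken from this repo:

    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # 'maven' makes Spark resolve the Hive metastore client jars itself
        # instead of reusing the hive-less ivy classpath shown in the trace.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'maven')
        .getOrCreate()
    )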
_________ TestDatabusMergerWorkflow.test_recovery_checkpoint_and_step __________
a = ('xro445', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro445'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
E   py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint_and_step>
    def setUp(self):
        super(TestDatabusMergerWorkflow, self).setUp()
>       test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_________ TestDatabusMergerWorkflow.test_recovery_checkpoint_disabled __________
a = ('xro467', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro467'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
E   py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusMergerWorkflow testMethod=test_recovery_checkpoint_disabled>
    def setUp(self):
        super(TestDatabusMergerWorkflow, self).setUp()
>       test_df = self.spark.read.parquet(self.input_data_path)
tests/integration/test_databus_s3_merger.py:23:
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
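All of the TestDatabusMergerWorkflow failures above come from the same shared session dying in setUp, so each of the collected tests re-reports an identical traceback. An illustrative pytest-style sketch (the repo's tests are unittest-style, so this is not its actual fixture) of probing the session once and aborting the run early:

    import pytest
    from pyspark.sql import SparkSession

    @pytest.fixture(scope='session')
    def spark():
        session = SparkSession.builder.master('local[2]').getOrCreate()
        try:
            # Forces sessionState, the exact path that fails above.
            session.sql('SELECT 1').collect()
        except Exception as exc:
            pytest.exit('Spark session unusable, aborting run: %s' % exc)
        return session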
__________________ TestDatabusS3DeltaSource.test_include_last __________________
a = ('xro489', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro489'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E     at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E     at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E     at scala.Option.getOrElse(Option.scala:121)
E     at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E     at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E     at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E     at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E     at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E     at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E     at py4j.Gateway.invoke(Gateway.java:280)
E     at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E     at py4j.commands.CallCommand.execute(CallCommand.java:79)
E     at py4j.GatewayConnection.run(GatewayConnection.java:214)
E     at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E     at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E     at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E     at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E     at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E     at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E     at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E     at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E     at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 17 more
E   Caused by: java.lang.reflect.InvocationTargetException
E     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E     at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E     at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E     at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 34 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E     at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 39 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E     at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E     at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E     at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 40 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_include_last>
    def setUp(self):
        super().setUp()

>       self.spark.sql('CREATE DATABASE IF NOT EXISTS test')
tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro489', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
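Note that both failing setUp calls are plain catalog operations, which Spark's built-in in-memory catalog (sketched after the first failure) handles without any Hive jars at all. A short usage sketch; the parquet path is a placeholder, not the repo's fixture data:

    spark.sql('CREATE DATABASE IF NOT EXISTS test')      # served by the in-memory catalog
    test_df = spark.read.parquet('/tmp/databus_input')   # placeholder path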
____________ TestDatabusS3DeltaSource.test_include_last_and_overlap ____________
a = ('xro509', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro509'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
During handling of the above exception, another exception occurred:

self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_include_last_and_overlap>

    def setUp(self):
        super().setUp()

>       self.spark.sql('CREATE DATABASE IF NOT EXISTS test')

tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro509', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
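
Every failure in this run bottoms out in the same missing class: org.apache.hadoop.hive.ql.session.SessionState (shipped in hive-exec) is absent from the isolated Hive client classpath listed above, which contains only the Ivy-resolved Kafka/Elasticsearch/Cassandra connector jars. The java.base/ frames also show the JVM is Java 9 or newer, which Spark 2.2.0 predates, so a JDK mismatch may be in play as well. As a minimal sketch of the remedy the error message itself suggests, the session could pin the metastore jars explicitly; the version and paths below are illustrative assumptions, not taken from this log:

    # Sketch only: metastore version and jar paths are placeholders.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .enableHiveSupport()
        # Must match the Hive metastore actually deployed.
        .config('spark.sql.hive.metastore.version', '1.2.1')
        # Classpath that provides hive-exec (the missing SessionState class)
        # plus the matching Hadoop client jars.
        .config('spark.sql.hive.metastore.jars',
                '/opt/hive/lib/*:/opt/hadoop/share/hadoop/common/*')
        .getOrCreate()
    )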
____________________ TestDatabusS3DeltaSource.test_overlap _____________________

a = ('xro529', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro529'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 17 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 34 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 39 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 40 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_databus_s3_merger.TestDatabusS3DeltaSource testMethod=test_overlap>

    def setUp(self):
        super().setUp()

>       self.spark.sql('CREATE DATABASE IF NOT EXISTS test')

tests/integration/test_databus_s3_merger.py:530:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro529', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
________________________ TestElasticLoader.test_simple _________________________

a = ('xro549', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro549'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E   at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E   at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 19 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 36 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 41 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 42 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_elastic_loader.TestElasticLoader testMethod=test_simple>

    def test_simple(self):
>       self.loader.run()

tests/integration/test_elastic_loader.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/loader.py:87: in run
    df = self._spark.read_ext.by_url(self._input_path)
../../pypi__sparkly_2_5_1/sparkly/reader.py:108: in by_url
    return resolver(parsed_url, parsed_qs)
../../pypi__sparkly_2_5_1/sparkly/reader.py:340: in _resolve_elastic
    **kwargs
../../pypi__sparkly_2_5_1/sparkly/reader.py:180: in elastic
    return self._basic_read(reader_options, options, parallelism)
../../pypi__sparkly_2_5_1/sparkly/reader.py:291: in _basic_read
    df = self._spark.read.load(**reader_options)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
    return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
    self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro549', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
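
The deco wrapper reproduced above is PySpark's py4j exception translator: it matches the Java exception's string prefix and re-raises a Python-level exception from pyspark.sql.utils, which is why each test surfaces as IllegalArgumentException rather than a raw Py4JJavaError. A small usage sketch of that behavior follows; the test function and the spark fixture are hypothetical, not part of this suite:

    # Hypothetical test: the translated pyspark.sql.utils exception type
    # is what reaches Python callers once py4j errors are wrapped.
    import pytest
    from pyspark.sql.utils import IllegalArgumentException

    def test_surfaces_translated_exception(spark):  # 'spark' fixture assumed
        with pytest.raises(IllegalArgumentException):
            spark.sql('CREATE DATABASE IF NOT EXISTS test')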
_________________ TestElasticMerge.test_elastic_merge_workflow _________________

a = ('xro571', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro571'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E   at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 18 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 35 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 40 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 41 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_elastic_merger.TestElasticMerge testMethod=test_elastic_merge_workflow>

    def test_elastic_merge_workflow(self):
        self.base_s3 = '/tmp/tubular-tests/castor/delta/{}/'.format(uuid.uuid4().hex)

        query_template = 'elastic://{host}:{port}/{es_index}/{es_type}'.format(
            host=self.ELASTIC_HOST,
            port=self.ELASTIC_PORT,
            es_index=self.ELASTIC_INDEX,
            es_type=self.ELASTIC_TYPE,
        )
        query_template += '?q={query}&scroll_field=import_date'

        merger = Merger(
            spark=self.spark,
            delta_source=ElasticDeltaSource,
            delta_splitter=(
                'by_expression|'
                'nvl(date_format(publish_date, "y"), "-undefined")'
            ),
            merge_type='replace',
            input_path=query_template,
            output_delta_path=os.path.join(self.base_s3, 'delta=1'),
            output_schema={
                'doc_id': "_metadata['_id']",
                'views': 'views',
                '_metadata': '_metadata',
                'import_date': 'import_date',
                'publish_date': 'publish_date',
                'publish_month': "nvl(date_format(publish_date, 'y-MM'), '-undefined')",
            },
            output_unique_by=['doc_id'],
            output_resolve_by=None,
            output_partition_by=['publish_month'],
            output_table='castor_elastic',
            warehouse_path='/tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex),
        )

        with mock.patch.object(
            merger,
            'input_path',
            merger.input_path + '&max_docs=3',
        ), mock.patch.object(
            merger.delta_source,
            'MAX_DOCS_DIFF',
            1,
        ):
>           merger.run()

tests/integration/test_elastic_merger.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/merger.py:209: in run
    recovery_step=self.recovery_step,
castor/delta_source/elastic.py:44: in get_unprocessed_parts
    if spark.catalog_ext.has_table(output_table):
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro571', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
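
Note that the entry point differs per test, but every stack converges on SparkSession.sessionState$lzycompute: session state, and with it the Hive external catalog, is instantiated lazily on first use, so whichever catalog-touching call a test makes first fails identically. A sketch of the equivalent triggers observed in this run, assuming an existing SparkSession named spark:

    # Any first catalog access forces Hive session-state creation,
    # so each of these hits the same root cause in this run.
    spark.sql('CREATE DATABASE IF NOT EXISTS test')  # o26.sql
    spark.read                                       # o27.read
    spark.catalog.currentDatabase()                  # o28.currentDatabase
    spark.catalog.listDatabases()                    # o28.listDatabases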
_______________ TestKafkaMerger.test_calculate_unprocessed_parts _______________

a = ('xro592', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro592'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E   py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E   at scala.Option.getOrElse(Option.scala:121)
E   at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E   at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E   at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E   at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E   at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E   at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E   at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E   at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E   at py4j.Gateway.invoke(Gateway.java:280)
E   at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E   at py4j.commands.CallCommand.execute(CallCommand.java:79)
E   at py4j.GatewayConnection.run(GatewayConnection.java:214)
E   at java.base/java.lang.Thread.run(Thread.java:834)
E   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E   at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E   at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E   at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E   at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E   at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E   at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E   ... 18 more
E   Caused by: java.lang.reflect.InvocationTargetException
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E   at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E   at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E   at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E   ... 35 more
E   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E   at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E   ... 40 more
E   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E   at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E   at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E   at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E   ... 41 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_calculate_unprocessed_parts>

    def setUp(self):
        super(TestKafkaMerger, self).setUp()
        self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
        self.table_name = 'test_castor'
        self.topic = 'topic_{}'.format(uuid.uuid4().hex)

>       if self.spark.catalog_ext.has_database('merge_db'):

tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
    for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
    iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro592', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
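
Note: every TestKafkaMerger failure in this run shares this one root cause. The Ivy-resolved classpath listed above contains no Hive jars, so Spark's IsolatedClientLoader cannot load org.apache.hadoop.hive.ql.session.SessionState, HiveSessionStateBuilder never instantiates, and the first catalog call (listDatabases) blows up inside setUp. Following the hint in the error message, one way out is to point spark.sql.hive.metastore.jars at a Hive/Hadoop jar set that matches the metastore version. Below is a minimal sketch of such a session; the 'builtin' setting and the 1.2.1 metastore version are assumptions based on Spark 2.2 defaults, not values taken from this log, and the app name is hypothetical.

    # Sketch only: configure the Hive client classpath before any catalog call.
    # 'builtin' tells Spark to reuse the Hive 1.2.1 classes bundled with the
    # Spark assembly; an explicit colon-separated jar path would also work.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .appName('castor-integration-tests')  # hypothetical app name
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', 'builtin')
        .enableHiveSupport()
        .getOrCreate()
    )

    # The call that fails in setUp above should then return normally:
    print([db.name for db in spark.catalog.listDatabases()])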
______________________ TestKafkaMerger.test_get_delta_df _______________________
E   py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
[Java and Python tracebacks identical to test_calculate_unprocessed_parts above; again raised from setUp at tests/integration/test_kafka_merger.py:29 via self.spark.catalog_ext.has_database('merge_db'); py4j answer id xro613]
_______________ TestKafkaMerger.test_get_multiple_topics_offsets _______________
E   py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
[Java and Python tracebacks identical to test_calculate_unprocessed_parts above; again raised from setUp at tests/integration/test_kafka_merger.py:29; py4j answer id xro634]
____________________ TestKafkaMerger.test_get_table_offsets ____________________
E   py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E   pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
[Java and Python tracebacks identical to test_calculate_unprocessed_parts above; again raised from setUp at tests/integration/test_kafka_merger.py:29; py4j answer id xro655]
[1m[31m____________________ TestKafkaMerger.test_get_topic_offsets ____________________[0m | |
a = ('xro676', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro676' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o28', name = 'listDatabases' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)[0m | |
[1m[31mE at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 18 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 35 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 40 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
During handling of the above exception, another exception occurred:

self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_get_topic_offsets>

    def setUp(self):
        super(TestKafkaMerger, self).setUp()
        self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
        self.table_name = 'test_castor'
        self.topic = 'topic_{}'.format(uuid.uuid4().hex)

>       if self.spark.catalog_ext.has_database('merge_db'):

tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
    for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
    iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro676', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
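The deco wrapper shown above is why the Java-side error reaches the test as pyspark.sql.utils.IllegalArgumentException rather than a raw Py4JJavaError: deco re-raises known Java exception prefixes as their pyspark equivalents. A minimal sketch of guarding a catalog call against exactly this translated exception (the helper name is hypothetical; spark.catalog.listDatabases() is the same pyspark 2.2 call the failing frames go through):

# Sketch: catching the exception that deco raises above. The helper name
# (has_database_safely) is hypothetical, not part of sparkly or castor.
from pyspark.sql.utils import IllegalArgumentException

def has_database_safely(spark, name):
    # Returns True/False when the catalog is reachable, or None when the
    # Hive session state itself cannot be built (the failure in this log).
    try:
        return any(db.name == name for db in spark.catalog.listDatabases())
    except IllegalArgumentException:
        return None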
__________________ TestKafkaMerger.test_kafka_merge_init_from __________________
a = ('xro697', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro697'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'listDatabases'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E                   py4j.protocol.Py4JJavaError: An error occurred while calling o28.listDatabases.
E                   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E                       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E                       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E                       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E                       at scala.Option.getOrElse(Option.scala:121)
E                       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E                       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E                       at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E                       at org.apache.spark.sql.internal.CatalogImpl.listDatabases(CatalogImpl.scala:72)
E                       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E                       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E                       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E                       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E                       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E                       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E                       at py4j.Gateway.invoke(Gateway.java:280)
E                       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E                       at py4j.commands.CallCommand.execute(CallCommand.java:79)
E                       at py4j.GatewayConnection.run(GatewayConnection.java:214)
E                       at java.base/java.lang.Thread.run(Thread.java:834)
E                   Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E                   Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E                       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E                       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E                       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E                       at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E                       at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E                       at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E                       at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E                       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E                       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E                       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E                       at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E                       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E                       ... 18 more
E                   Caused by: java.lang.reflect.InvocationTargetException
E                       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E                       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E                       at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E                       at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E                       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E                       ... 35 more
E                   Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E                       at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E                       ... 40 more
E                   Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E                       at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E                       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E                       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E                       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E                       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E                       ... 41 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_kafka_merger.TestKafkaMerger testMethod=test_kafka_merge_init_from>

    def setUp(self):
        super(TestKafkaMerger, self).setUp()
        self.base_s3 = 'file:///tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex)
        self.table_name = 'test_castor'
        self.topic = 'topic_{}'.format(uuid.uuid4().hex)

>       if self.spark.catalog_ext.has_database('merge_db'):

tests/integration/test_kafka_merger.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:141: in has_database
    for db in self._spark.catalog.listDatabases():
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:62: in listDatabases
    iter = self._jcatalog.listDatabases().toLocalIterator()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro697', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
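The "Please make sure that jars..." line above is the actionable part of this traceback: spark.sql.hive.metastore.jars defaults to 'builtin', which expects the Hive classes on Spark's own classpath, and the Ivy-resolved jar list shown contains no Hive jars at all, so org.apache.hadoop.hive.ql.session.SessionState cannot be loaded. A minimal sketch of the relevant configuration, assuming a local session and Spark 2.2's documented values for these settings (this is illustrative, not the castor test harness's actual session factory):

# Sketch, not the project's session setup: point the Hive metastore client
# at jars that actually contain the Hive classes.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master('local[2]')
    # 'maven' makes Spark resolve the Hive client jars itself; the
    # alternative is a classpath string that already includes them.
    .config('spark.sql.hive.metastore.jars', 'maven')
    # Spark 2.2 ships with Hive 1.2.1 as its built-in client version.
    .config('spark.sql.hive.metastore.version', '1.2.1')
    .enableHiveSupport()
    .getOrCreate()
)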
__________________ TestKafkaMerger.test_kafka_merge_workflow ___________________
a = ('xro718', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')

(traceback identical to test_kafka_merge_init_from above)

E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_________________ TestKafkaMerger.test_merge_to_non_default_db _________________
a = ('xro739', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'listDatabases')

(traceback identical to test_kafka_merge_init_from above)

E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
___________ TestLoaderPartitioned.test_load_to_non_default_database ____________
a = ('xro760', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

answer = 'xro760'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'sql'
(get_return_value source identical to the failures above)
E                   py4j.protocol.Py4JJavaError: An error occurred while calling o26.sql.
E                   : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E                       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E                       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E                       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E                       at scala.Option.getOrElse(Option.scala:121)
E                       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E                       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E                       at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:623)
E                   (remaining frames and Caused-by chain identical to the listDatabases failures above: the Hive client classpath is missing org.apache.hadoop.hive.ql.session.SessionState)
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_loader_partitioned.TestLoaderPartitioned testMethod=test_load_to_non_default_database>

    def setUp(self):
        super(TestLoaderPartitioned, self).setUp()
        self.csv_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                                     'resources',
                                     'csv_setup.csv')
        self.csv_file_2 = os.path.join(os.path.dirname(os.path.realpath(__file__)),
                                       'resources',
                                       'csv_setup_2.csv')

        self.loader = Loader(
            spark=self.spark,
            input_path='csv://localhost{}?header=True'.format(self.csv_file),
            output_table='csv_data',
            output_partition_by=['country'],
            warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
        )

        self.loader_2 = Loader(
            spark=self.spark,
            input_path='csv://localhost{}?header=True'.format(self.csv_file_2),
            output_schema=[('name', 'name'),
                           ('country', 'country'),
                           ('age', 'age'),
                           ('platform', 'platform'),
                           ('`1-2_3`', '`1-2_3`'),
                           ('`a__--b`', 'substr(`1-2_3`, 0, 3)')],
            output_table='csv_data',
            output_partition_by=['platform', 'country'],
            # Unfortunately we can't check with a small dataset that we actually have 2 files,
            # because they most likely fall into a single file.
            # At least we can check that the code isn't broken.
            output_partitions=2,
            warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
        )
>       self.spark.sql("DROP TABLE IF EXISTS csv_data")

tests/integration/test_loader_partitioned.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

a = ('xro760', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
(deco source identical to the failures above)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
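Both test classes isolate their on-disk state the same way: every setUp derives fresh uuid-suffixed paths (base_s3, warehouse_path, topic) so reruns cannot collide. A generic sketch of that pattern; the class and attribute names here are hypothetical, not the castor base class:

# Hypothetical sketch of the per-test isolation pattern visible in the
# setUp methods above: a unique temp path per run, removed in tearDown.
import shutil
import unittest
import uuid

class IsolatedWarehouseTest(unittest.TestCase):
    def setUp(self):
        # Mirrors base_s3 / warehouse_path in the log: unique per test run.
        self.warehouse_path = '/tmp/castor/{}/'.format(uuid.uuid4().hex)

    def tearDown(self):
        shutil.rmtree(self.warehouse_path, ignore_errors=True)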
___________________ TestLoaderPartitioned.test_partitioning ____________________
a = ('xro780', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')

(traceback identical to test_load_to_non_default_database above; the log is truncated partway through this final stack trace)
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 17 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 34 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 39 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 40 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_loader_partitioned.TestLoaderPartitioned testMethod=test_partitioning> | |
[1m def setUp(self):[0m | |
[1m super(TestLoaderPartitioned, self).setUp()[0m | |
[1m self.csv_file = os.path.join(os.path.dirname(os.path.realpath(__file__)),[0m | |
[1m 'resources',[0m | |
[1m 'csv_setup.csv')[0m | |
[1m self.csv_file_2 = os.path.join(os.path.dirname(os.path.realpath(__file__)),[0m | |
[1m 'resources',[0m | |
[1m 'csv_setup_2.csv')[0m | |
[1m [0m | |
[1m self.loader = Loader([0m | |
[1m spark=self.spark,[0m | |
[1m input_path='csv://localhost{}?header=True'.format(self.csv_file),[0m | |
[1m output_table='csv_data',[0m | |
[1m output_partition_by=['country'],[0m | |
[1m warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),[0m | |
[1m )[0m | |
[1m [0m | |
[1m self.loader_2 = Loader([0m | |
[1m spark=self.spark,[0m | |
[1m input_path='csv://localhost{}?header=True'.format(self.csv_file_2),[0m | |
[1m output_schema=[('name', 'name'),[0m | |
[1m ('country', 'country'),[0m | |
[1m ('age', 'age'),[0m | |
[1m ('platform', 'platform'),[0m | |
[1m ('`1-2_3`', '`1-2_3`'),[0m | |
[1m ('`a__--b`', 'substr(`1-2_3`, 0, 3)')],[0m | |
[1m output_table='csv_data',[0m | |
[1m output_partition_by=['platform', 'country'],[0m | |
            # Unfortunately, with a dataset this small we can't check that we
            # actually get 2 files, because the rows will most likely land in a
            # single file. At least we can check that the code isn't broken.
            output_partitions=2,
            warehouse_path='/tmp/castor/{}/'.format(uuid.uuid4().hex),
        )
>       self.spark.sql("DROP TABLE IF EXISTS csv_data")
tests/integration/test_loader_partitioned.py:47:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:556: in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro780', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'sql')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
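Every failure in this run bottoms out in the same cause chain: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState while Spark instantiates HiveSessionStateBuilder, together with Spark's own hint to check spark.sql.hive.metastore.jars. A minimal sketch of one way to act on that hint when constructing the test session follows; the jar directory and metastore version below are illustrative assumptions, not values taken from this run:

    from pyspark.sql import SparkSession

    # Sketch only: point the isolated Hive client at an explicit jar classpath
    # instead of the Ivy-resolved jars listed in the failure (which contain no
    # Hive/Hadoop jars). The paths and '1.2.1' are assumed placeholders.
    spark = (
        SparkSession.builder
        .enableHiveSupport()
        .config('spark.sql.hive.metastore.version', '1.2.1')
        .config('spark.sql.hive.metastore.jars', '/opt/hive/lib/*:/opt/hadoop/lib/*')
        .getOrCreate()
    )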
_________________________ TestMysqlLoader.test_loader __________________________
a = ('xro800', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro800'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o27', name = 'read'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o27.read.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at scala.Option.getOrElse(Option.scala:121)
E       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       at org.apache.spark.sql.DataFrameReader.<init>(DataFrameReader.scala:689)
E       at org.apache.spark.sql.SparkSession.read(SparkSession.scala:636)
E       at org.apache.spark.sql.SQLContext.read(SQLContext.scala:504)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       at py4j.Gateway.invoke(Gateway.java:280)
E       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       ... 19 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       ... 36 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       ... 41 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_mysql_loader.TestMysqlLoader testMethod=test_loader>
    def test_loader(self):
>       self.loader.run()
tests/integration/test_mysql_loader.py:40:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
castor/loader.py:87: in run
    df = self._spark.read_ext.by_url(self._input_path)
../../pypi__sparkly_2_5_1/sparkly/reader.py:108: in by_url
    return resolver(parsed_url, parsed_qs)
../../pypi__sparkly_2_5_1/sparkly/reader.py:350: in _resolve_mysql
    options=parsed_qs,
../../pypi__sparkly_2_5_1/sparkly/reader.py:214: in mysql
    return self._basic_read(reader_options, options, parallelism)
../../pypi__sparkly_2_5_1/sparkly/reader.py:291: in _basic_read
    df = self._spark.read.load(**reader_options)
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:580: in read
    return DataFrameReader(self._wrapped)
../../pypi__pyspark_2_2_0/pyspark/sql/readwriter.py:70: in __init__
    self._jreader = spark._ssql_ctx.read()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro800', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o27', 'read')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
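The frames quoted from ../../pypi__pyspark_2_2_0/pyspark/sql/utils.py show the deco wrapper rewrapping py4j.protocol.Py4JJavaError into typed PySpark exceptions, so calling code observes pyspark.sql.utils.IllegalArgumentException rather than the raw Java error. A small usage sketch of catching the rewrapped exception around the same kind of call these tests make (it assumes an already-built session named spark):

    from pyspark.sql.utils import IllegalArgumentException

    try:
        spark.sql('DROP TABLE IF EXISTS csv_data')  # mirrors the setUp call above
    except IllegalArgumentException as exc:
        # CapturedException carries the Java message and the joined stack trace.
        print(exc.desc)
        print(exc.stackTrace)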
_________________ TestQuantumMerger.test_kafka_merge_workflow __________________
a = ('xro834', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro834'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at scala.Option.getOrElse(Option.scala:121)
E       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E       at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E       at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       at py4j.Gateway.invoke(Gateway.java:280)
E       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       ... 19 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       ... 36 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       ... 41 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       ... 42 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_quantum_merger.TestQuantumMerger testMethod=test_kafka_merge_workflow>
    def test_kafka_merge_workflow(self):
        self.castor = Merger(
            spark=self.spark,
            delta_source=DatabusDeltaSource,
            delta_splitter='no_split',
            merge_type='quantum',
            input_path='databus://{}/{}?sr={}&port={}'.format(
                self.kafka_host, self.topic, self.sr_address, self.kafka_port),
            input_schema={},
            output_table=self.table_name,
            output_files_per_partition=2,
            warehouse_path='/tmp/tubular-tests/castor/{}/'.format(uuid.uuid4().hex),
        )

        insights = [
            {
                'account_gid': 'fba_9258148868',
                'end_date': '2017-11-16',
                'metric': 'page_video_views_organic',
                'period': 'day',
                'report_name': 'facebook_page_views_3s_v1',
                'value_mult': None,
                'value_single': 2000.0,
                'video_gid': None,
                'fetch_time': 1520000000,
            },
            # The next two messages test delta dedup for day-period data
            {
                'account_gid': 'fba_9258148868',
                'end_date': '2017-11-17',
                'metric': 'page_video_views_organic',
                'period': 'day',
                'report_name': 'facebook_page_views_3s_v1',
                'value_mult': None,
                'value_single': 1000.0,
                'video_gid': None,
                'fetch_time': 1520000000,
            },
            {
                'account_gid': 'fba_9258148868',
                'end_date': '2017-11-17',
                'metric': 'page_video_views_organic',
                'period': 'day',
                'report_name': 'facebook_page_views_3s_v1',
                'value_mult': None,
                'value_single': 1000.0,
                'video_gid': None,
                'fetch_time': 1520000010,
            },
            # The next two messages test delta dedup for multi-dim data
            {
                'account_gid': 'fba_39355118462',
                'end_date': '2018-02-12',
                'metric': 'page_fans_locale',
                'period': 'lifetime',
                'report_name': 'facebook_page_fans_demo_v1',
                'value_mult': {
                    'ar_AR': 1000.0,
                    'bg_BG': 100.0,
                    'cs_CZ': 10.0,
                },
                'value_single': None,
                'video_gid': None,
                'fetch_time': 1520000000,
            },
            {
                'account_gid': 'fba_39355118462',
                'end_date': '2018-02-12',
                'metric': 'page_fans_locale',
                'period': 'lifetime',
                'report_name': 'facebook_page_fans_demo_v1',
                'value_mult': {
                    'ar_AR': 1010.0,
                    'bg_BG': 100.0,
                    'cs_CZ': 10.0,
                },
                'value_single': None,
                'video_gid': None,
                'fetch_time': 1520000100,
            },
            # The next two messages test delta dedup for continuous data, as well as
            # `date` field creation.
            {
                'account_gid': 'fba_9258148868',
                'end_date': None,
                'metric': 'post_video_views_10s_organic',
                'period': 'lifetime',
                'report_name': 'facebook_post_video_views_10s_v1',
                'value_mult': None,
                'value_single': 2000.0,
                'video_gid': 'fbv_12345678',
                'fetch_time': 1520000200,
            },
            {
                'account_gid': 'fba_9258148868',
                'end_date': None,
                'metric': 'post_video_views_10s_organic',
                'period': 'lifetime',
                'report_name': 'facebook_post_video_views_10s_v1',
                'value_mult': None,
                'value_single': 3000.0,
                'video_gid': 'fbv_12345678',
                'fetch_time': 1520000500,
            },

            # Will test dedup for merged data and new data from delta for day-period data
            {
                'account_gid': 'fba_9258148868',
                'end_date': '2017-11-17',
                'metric': 'page_video_views_organic',
                'period': 'day',
                'report_name': 'facebook_page_views_3s_v1',
                'value_mult': None,
                'value_single': 1000.0,
                'video_gid': None,
                'fetch_time': 1520001000,
            },
            # Will test dedup for merged data and new data from delta for multi-dim data
            {
                'account_gid': 'fba_39355118462',
                'end_date': '2018-02-12',
                'metric': 'page_fans_locale',
                'period': 'lifetime',
                'report_name': 'facebook_page_fans_demo_v1',
                'value_mult': {
                    'ar_AR': 1100.0,
                    'bg_BG': 100.0,
                    'cs_CZ': 10.0,
                },
                'value_single': None,
                'video_gid': None,
                'fetch_time': 1520001000,
            },
            # Tests dedup for merged data and new data from delta for continuous data.
            {
                'account_gid': 'fba_9258148868',
                'end_date': None,
                'metric': 'post_video_views_10s_organic',
                'period': 'lifetime',
                'report_name': 'facebook_post_video_views_10s_v1',
                'value_mult': None,
                'value_single': 4000.0,
                'video_gid': 'fbv_12345678',
                'fetch_time': 1520001300,
            },

        ]
>       self._publish_data(insights[:7])
tests/integration/test_quantum_merger.py:236:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests/integration/test_quantum_merger.py:80: in _publish_data
    self.databus_schema,
../../pypi__pyspark_2_2_0/pyspark/sql/session.py:539: in createDataFrame
    jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro834', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
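The classpath dumped in each Caused by line lists only the Ivy-resolved connector jars from ~/.ivy2/jars, with no Hive or Hadoop jars among them. Reading the relevant settings back from the running context is one way to confirm that; the sketch below deliberately goes through SparkContext.getConf() rather than spark.conf, since the latter touches the very session state that fails to build here. The '<unset>' fallbacks are illustrative:

    # Sketch: inspect metastore-related settings without triggering Hive init.
    conf = spark.sparkContext.getConf()
    print(conf.get('spark.sql.hive.metastore.jars', 'builtin'))  # 'builtin' is Spark's documented default
    print(conf.get('spark.sql.hive.metastore.version', '<unset>'))
    print(conf.get('spark.jars.packages', '<unset>'))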
____________________ TestAvroSchemaEvolution.test_add_field ____________________
a = ('xro876', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
>           return f(*a, **kw)
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro876'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'
    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       at scala.Option.getOrElse(Option.scala:121)
E       at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E       at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       at py4j.Gateway.invoke(Gateway.java:280)
E       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       ... 18 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       ... 35 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       ... 40 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       ... 41 more
../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError
During handling of the above exception, another exception occurred:
self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_add_field>
    def setUp(self):
        self._register_avro_schemas()

        self.output_table = 'test_castor_avro'

>       if self.spark.catalog_ext.has_table(self.output_table):
tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro876', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'
    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
_____________ TestAvroSchemaEvolution.test_add_field_modify_schema _____________

a = ('xro897', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro897'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at scala.Option.getOrElse(Option.scala:121)
E       	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       	at org.apache.spark.sql.internal.CatalogImpl.org$apache$spark$sql$internal$CatalogImpl$$sessionCatalog(CatalogImpl.scala:40)
E       	at org.apache.spark.sql.internal.CatalogImpl.currentDatabase(CatalogImpl.scala:57)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       	at py4j.Gateway.invoke(Gateway.java:280)
E       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       	at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       	at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       	... 18 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       	... 35 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       	... 40 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)
E       	... 41 more

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_add_field_modify_schema>

    def setUp(self):
        self._register_avro_schemas()

        self.output_table = 'test_castor_avro'

>       if self.spark.catalog_ext.has_table(self.output_table):

tests/integration/test_schema_evolution.py:118:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
a = ('xro897', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
            return f(*a, **kw)
        except py4j.protocol.Py4JJavaError as e:
            s = e.java_exception.toString()
            stackTrace = '\n\t at '.join(map(lambda x: x.toString(),
                                             e.java_exception.getStackTrace()))
            if s.startswith('org.apache.spark.sql.AnalysisException: '):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.analysis'):
                raise AnalysisException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):
                raise ParseException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):
                raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):
                raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)
            if s.startswith('java.lang.IllegalArgumentException: '):
>               raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)
E               pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestAvroSchemaEvolution.test_delete_field ___________________

a = ('xro918', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro918'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'

E       py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       [get_return_value source and Java stack trace identical to test_add_field_modify_schema above]

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_delete_field>

    def setUp(self):
        self._register_avro_schemas()

        self.output_table = 'test_castor_avro'

>       if self.spark.catalog_ext.has_table(self.output_table):

tests/integration/test_schema_evolution.py:118:
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
[pyspark deco wrapper identical to the one above]
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
___________ TestAvroSchemaEvolution.test_remove_field_modify_schema ____________

a = ('xro939', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro939'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'

E       py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       [get_return_value source and Java stack trace identical to test_add_field_modify_schema above]

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_remove_field_modify_schema>

    def setUp(self):
        self._register_avro_schemas()

        self.output_table = 'test_castor_avro'

>       if self.spark.catalog_ext.has_table(self.output_table):

tests/integration/test_schema_evolution.py:118:
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
[pyspark deco wrapper identical to the one above]
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
__________________ TestAvroSchemaEvolution.test_rename_field ___________________

a = ('xro960', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o28', 'currentDatabase')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro960'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o28', name = 'currentDatabase'

E       py4j.protocol.Py4JJavaError: An error occurred while calling o28.currentDatabase.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       [get_return_value source and Java stack trace identical to test_add_field_modify_schema above]

../../pypi__py4j_0_10_4/py4j/protocol.py:319: Py4JJavaError

During handling of the above exception, another exception occurred:

self = <tests.integration.test_schema_evolution.TestAvroSchemaEvolution testMethod=test_rename_field>

    def setUp(self):
        self._register_avro_schemas()

        self.output_table = 'test_castor_avro'

>       if self.spark.catalog_ext.has_table(self.output_table):

tests/integration/test_schema_evolution.py:118:
../../pypi__sparkly_2_5_1/sparkly/catalog.py:123: in has_table
    for table in self._spark.catalog.listTables(db_name):
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:81: in listTables
    dbName = self.currentDatabase()
../../pypi__pyspark_2_2_0/pyspark/sql/catalog.py:50: in currentDatabase
    return self._jcatalog.currentDatabase()
../../pypi__py4j_0_10_4/py4j/java_gateway.py:1133: in __call__
    answer, self.gateway_client, self.target_id, self.name)
[pyspark deco wrapper identical to the one above]
E       pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:79: IllegalArgumentException
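All five TestAvroSchemaEvolution failures above are the same setUp crash, so the suite spends a full traceback per test on one environment problem. A possible guard, sketched under the assumption that the shared sparkly session is reachable from a fixture (the function name here is illustrative, not part of the suite):

    # Sketch: probe the Hive session state once and skip dependent tests,
    # instead of letting every setUp fail with the same Py4JJavaError.
    import pytest

    def require_hive_catalog(spark):
        try:
            # The failures above all enter through listTables -> currentDatabase,
            # so this probe exercises the same code path.
            spark.catalog.currentDatabase()
        except Exception as exc:
            pytest.skip('Hive session state unavailable: {}'.format(exc))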
_________ TestTwitchStreamMerger.test_account_level_values_per_segment _________

a = ('xro993', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD')
kw = {}
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)'

    def deco(*a, **kw):
        try:
>           return f(*a, **kw)

../../pypi__pyspark_2_2_0/pyspark/sql/utils.py:63:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
answer = 'xro993'
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>
target_id = 'o26', name = 'applySchemaToPythonRDD'

    def get_return_value(answer, gateway_client, target_id=None, name=None):
        """Converts an answer received from the Java gateway into a Python object.

        For example, string representation of integers are converted to Python
        integer, string representation of objects are converted to JavaObject
        instances, etc.

        :param answer: the string returned by the Java gateway
        :param gateway_client: the gateway client used to communicate with the Java
            Gateway. Only necessary if the answer is a reference (e.g., object,
            list, map)
        :param target_id: the name of the object from which the answer comes from
            (e.g., *object1* in `object1.hello()`). Optional.
        :param name: the name of the member from which the answer comes from
            (e.g., *hello* in `object1.hello()`). Optional.
        """
        if is_error(answer)[0]:
            if len(answer) > 1:
                type = answer[1]
                value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)
                if answer[1] == REFERENCE_TYPE:
                    raise Py4JJavaError(
                        "An error occurred while calling {0}{1}{2}.\n".
>                       format(target_id, ".", name), value)
E       py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.
E       : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)
E       	at scala.Option.getOrElse(Option.scala:121)
E       	at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)
E       	at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)
E       	at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)
E       	at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)
E       	at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
E       	at java.base/java.lang.reflect.Method.invoke(Method.java:566)
E       	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
E       	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
E       	at py4j.Gateway.invoke(Gateway.java:280)
E       	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
E       	at py4j.commands.CallCommand.execute(CallCommand.java:79)
E       	at py4j.GatewayConnection.run(GatewayConnection.java:214)
E       	at java.base/java.lang.Thread.run(Thread.java:834)
E       Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar
E       Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)
E       	at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)
E       	at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)
E       	at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)
E       	at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)
E       	at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)
E       	at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)
E       	... 19 more
E       Caused by: java.lang.reflect.InvocationTargetException
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
E       	at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
E       	at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
E       	at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)
E       	... 36 more
E       Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState
E       	at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)
E       	... 41 more
E       Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState
E       	at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)
E       	at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)
E       	at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_account_level_values_per_segment> | |
[1m def test_account_level_values_per_segment(self):[0m | |
[1m """Tests if account gid, language, viewers are assigned properly."""[0m | |
[1m delta = with_fields([0m | |
[1m self.MESSAGE,[0m | |
[1m {[0m | |
[1m 'game.name': ['Fifa', 'Fifa', 'WoW', 'WoW', 'Fortnite', 'Fortnite'],[0m | |
[1m 'account.gid': ['tta_johny', 'tta_johny',[0m | |
[1m 'tta_john', 'tta_john',[0m | |
[1m 'tta_johny', 'tta_johny'],[0m | |
[1m 'language': ['br', 'br', 'en', 'it', 'it', 'it'],[0m | |
[1m 'fetch_time': [[0m | |
[1m datetime(2017, 1, 1) + _ for _ in [[0m | |
[1m timedelta(hours=1),[0m | |
[1m timedelta(hours=2),[0m | |
[1m timedelta(hours=3),[0m | |
[1m timedelta(hours=4),[0m | |
[1m timedelta(hours=5),[0m | |
[1m timedelta(hours=6),[0m | |
[1m ][0m | |
[1m ],[0m | |
[1m },[0m | |
[1m )[0m | |
[1m [0m | |
[1m> self._get_merger(delta[:-2]).run()[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:536: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:1286: in _get_merger | |
[1m self._create_delta_df(delta),[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:1344: in _create_delta_df | |
[1m parse_schema(schema),[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:539: in createDataFrame | |
[1m jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro993', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
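Every failure in this run has the same root cause: Spark tries to build a Hive-backed session (HiveSessionStateBuilder -> HiveClientImpl), but org.apache.hadoop.hive.ql.session.SessionState is missing from the isolated Hive client classpath listed above, so session instantiation dies before createDataFrame ever runs. Below is a minimal sketch of a workaround, assuming the tests do not actually need a real Hive metastore; spark.sql.catalogImplementation and spark.sql.hive.metastore.jars are standard Spark SQL settings, while the app name and jar paths are hypothetical placeholders.

    # Minimal sketch, assuming no real Hive metastore is needed for the tests.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .master('local[2]')
        .appName('castor-integration-tests')  # hypothetical name
        # Option 1: keep the default in-memory catalog, so that
        # HiveSessionStateBuilder / HiveClientImpl are never instantiated.
        .config('spark.sql.catalogImplementation', 'in-memory')
        # Option 2 (instead of option 1): keep Hive support but point Spark
        # at a classpath with matching hive and hadoop jars, as the error
        # message suggests, e.g.:
        # .config('spark.sql.hive.metastore.jars',
        #         '/opt/hive/lib/*:/opt/hadoop/lib/*')  # hypothetical paths
        .getOrCreate()
    )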
[1m[31m_______ TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_chunked _______[0m | |
a = ('xro1047', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro1047' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o26', name = 'applySchemaToPythonRDD' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_chunked> | |
[1m def test_ascii_trapezoid_double_ticks_chunked(self):[0m | |
[1m """Process trapezoid in chunks."""[0m | |
[1m delta = with_fields([0m | |
[1m self.MESSAGE,[0m | |
[1m self.TRAPEZOID_CONF,[0m | |
[1m )[::2][0m | |
[1m [0m | |
[1m> self._get_merger(delta[:3]).run()[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:878: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:1286: in _get_merger | |
[1m self._create_delta_df(delta),[0m | |
[1m[31mtests/integration/test_twitch_streams_merger.py[0m:1344: in _create_delta_df | |
[1m parse_schema(schema),[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/session.py[0m:539: in createDataFrame | |
[1m jdf = self._jsparkSession.applySchemaToPythonRDD(jrdd.rdd(), schema.json())[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/java_gateway.py[0m:1133: in __call__ | |
[1m answer, self.gateway_client, self.target_id, self.name)[0m | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
a = ('xro1047', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m return f(*a, **kw)[0m | |
[1m except py4j.protocol.Py4JJavaError as e:[0m | |
[1m s = e.java_exception.toString()[0m | |
[1m stackTrace = '\n\t at '.join(map(lambda x: x.toString(),[0m | |
[1m e.java_exception.getStackTrace()))[0m | |
[1m if s.startswith('org.apache.spark.sql.AnalysisException: '):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.analysis'):[0m | |
[1m raise AnalysisException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.catalyst.parser.ParseException: '):[0m | |
[1m raise ParseException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.streaming.StreamingQueryException: '):[0m | |
[1m raise StreamingQueryException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('org.apache.spark.sql.execution.QueryExecutionException: '):[0m | |
[1m raise QueryExecutionException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m if s.startswith('java.lang.IllegalArgumentException: '):[0m | |
[1m> raise IllegalArgumentException(s.split(': ', 1)[1], stackTrace)[0m | |
[1m[31mE pyspark.sql.utils.IllegalArgumentException: "Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':"[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:79: IllegalArgumentException | |
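One more detail in these traces: the java.base/ prefixes on the JDK frames (java.base/java.lang.Thread.run and friends) mean the gateway JVM is Java 9 or newer, while pyspark 2.2.0 predates official Java 9+ support (that arrived in Spark 3.0), and Hive client classloading is a common casualty of the mismatch. A hypothetical pre-flight check like the sketch below would fail fast with one readable message instead of repeating the Py4J wall per test; the version-string parsing is an assumption, since `java -version` output varies across vendors.

    # Hypothetical pre-flight check: refuse to start if the JVM is not Java 8.
    # `java -version` prints '1.8.0_x' on Java 8 and '9', '10.x', '11.x' later.
    import re
    import subprocess

    def assert_java8():
        out = subprocess.check_output(
            ['java', '-version'], stderr=subprocess.STDOUT
        ).decode('utf-8', 'replace')
        match = re.search(r'version "([0-9]+)\.([0-9]+)', out)
        if match is None or (int(match.group(1)), int(match.group(2))) != (1, 8):
            raise RuntimeError(
                'pyspark 2.2 needs Java 8, found: ' + out.splitlines()[0])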
[1m[31m________ TestTwitchStreamMerger.test_ascii_trapezoid_double_ticks_whole ________[0m | |
a = ('xro1101', <py4j.java_gateway.GatewayClient object at 0x10ad8cf60>, 'o26', 'applySchemaToPythonRDD') | |
kw = {} | |
s = "java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':" | |
stackTrace = 'org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053...)\n\t at py4j.GatewayConnection.run(GatewayConnection.java:214)\n\t at java.base/java.lang.Thread.run(Thread.java:834)' | |
[1m def deco(*a, **kw):[0m | |
[1m try:[0m | |
[1m> return f(*a, **kw)[0m | |
[1m[31m../../pypi__pyspark_2_2_0/pyspark/sql/utils.py[0m:63: | |
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ | |
answer = 'xro1101' | |
gateway_client = <py4j.java_gateway.GatewayClient object at 0x10ad8cf60> | |
target_id = 'o26', name = 'applySchemaToPythonRDD' | |
[1m def get_return_value(answer, gateway_client, target_id=None, name=None):[0m | |
[1m """Converts an answer received from the Java gateway into a Python object.[0m | |
[1m [0m | |
[1m For example, string representation of integers are converted to Python[0m | |
[1m integer, string representation of objects are converted to JavaObject[0m | |
[1m instances, etc.[0m | |
[1m [0m | |
[1m :param answer: the string returned by the Java gateway[0m | |
[1m :param gateway_client: the gateway client used to communicate with the Java[0m | |
[1m Gateway. Only necessary if the answer is a reference (e.g., object,[0m | |
[1m list, map)[0m | |
[1m :param target_id: the name of the object from which the answer comes from[0m | |
[1m (e.g., *object1* in `object1.hello()`). Optional.[0m | |
[1m :param name: the name of the member from which the answer comes from[0m | |
[1m (e.g., *hello* in `object1.hello()`). Optional.[0m | |
[1m """[0m | |
[1m if is_error(answer)[0]:[0m | |
[1m if len(answer) > 1:[0m | |
[1m type = answer[1][0m | |
[1m value = OUTPUT_CONVERTER[type](answer[2:], gateway_client)[0m | |
[1m if answer[1] == REFERENCE_TYPE:[0m | |
[1m raise Py4JJavaError([0m | |
[1m "An error occurred while calling {0}{1}{2}.\n".[0m | |
[1m> format(target_id, ".", name), value)[0m | |
[1m[31mE py4j.protocol.Py4JJavaError: An error occurred while calling o26.applySchemaToPythonRDD.[0m | |
[1m[31mE : java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionStateBuilder':[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1053)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$$anonfun$sessionState$2.apply(SparkSession.scala:130)[0m | |
[1m[31mE at scala.Option.getOrElse(Option.scala:121)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState$lzycompute(SparkSession.scala:129)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.sessionState(SparkSession.scala:126)[0m | |
[1m[31mE at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:729)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession.applySchemaToPythonRDD(SparkSession.scala:719)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)[0m | |
[1m[31mE at java.base/java.lang.reflect.Method.invoke(Method.java:566)[0m | |
[1m[31mE at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)[0m | |
[1m[31mE at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)[0m | |
[1m[31mE at py4j.Gateway.invoke(Gateway.java:280)[0m | |
[1m[31mE at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)[0m | |
[1m[31mE at py4j.commands.CallCommand.execute(CallCommand.java:79)[0m | |
[1m[31mE at py4j.GatewayConnection.run(GatewayConnection.java:214)[0m | |
[1m[31mE at java.base/java.lang.Thread.run(Thread.java:834)[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState when creating Hive client using classpath: file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-streaming-kafka-0-8_2.11-2.1.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-sql-kafka-0-10_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.elasticsearch_elasticsearch-spark-20_2.11-6.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/com.tubularlabs_confluent-spark-avro_2.11-1.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/datastax_spark-cassandra-connector-2.0.0-M2-s_2.11.jar, file:/Users/pavloskliar/.ivy2/jars/mysql_mysql-connector-java-5.1.39.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka_2.11-0.8.2.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.spark-project.spark_unused-1.0.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-xml_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.yammer.metrics_metrics-core-2.2.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang.modules_scala-parser-combinators_2.11-1.0.2.jar, file:/Users/pavloskliar/.ivy2/jars/com.101tec_zkclient-0.3.jar, file:/Users/pavloskliar/.ivy2/jars/org.slf4j_slf4j-api-1.7.16.jar, file:/Users/pavloskliar/.ivy2/jars/log4j_log4j-1.2.17.jar, file:/Users/pavloskliar/.ivy2/jars/net.jpountz.lz4_lz4-1.3.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.xerial.snappy_snappy-java-1.1.2.6.jar, file:/Users/pavloskliar/.ivy2/jars/org.scala-lang_scala-reflect-2.11.8.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.kafka_kafka-clients-0.10.0.1.jar, file:/Users/pavloskliar/.ivy2/jars/org.apache.spark_spark-tags_2.11-2.1.1.jar, file:/Users/pavloskliar/.ivy2/jars/commons-beanutils_commons-beanutils-1.8.0.jar, file:/Users/pavloskliar/.ivy2/jars/org.joda_joda-convert-1.2.jar, file:/Users/pavloskliar/.ivy2/jars/joda-time_joda-time-2.3.jar, file:/Users/pavloskliar/.ivy2/jars/io.netty_netty-all-4.0.33.Final.jar, file:/Users/pavloskliar/.ivy2/jars/com.twitter_jsr166e-1.1.0.jar[0m | |
[1m[31mE Please make sure that jars for your version of hive and hadoop are included in the paths passed to spark.sql.hive.metastore.jars.[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:270)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:362)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveUtils$.newClientForMetadata(HiveUtils.scala:266)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client$lzycompute(HiveExternalCatalog.scala:66)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.client(HiveExternalCatalog.scala:65)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply$mcZ$sp(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog$$anonfun$databaseExists$1.apply(HiveExternalCatalog.scala:194)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:97)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveExternalCatalog.databaseExists(HiveExternalCatalog.scala:193)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog$lzycompute(SharedState.scala:105)[0m | |
[1m[31mE at org.apache.spark.sql.internal.SharedState.externalCatalog(SharedState.scala:93)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.externalCatalog(HiveSessionStateBuilder.scala:39)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog$lzycompute(HiveSessionStateBuilder.scala:54)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:52)[0m | |
[1m[31mE at org.apache.spark.sql.hive.HiveSessionStateBuilder.catalog(HiveSessionStateBuilder.scala:35)[0m | |
[1m[31mE at org.apache.spark.sql.internal.BaseSessionStateBuilder.build(BaseSessionStateBuilder.scala:289)[0m | |
[1m[31mE at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$instantiateSessionState(SparkSession.scala:1050)[0m | |
[1m[31mE ... 19 more[0m | |
[1m[31mE Caused by: java.lang.reflect.InvocationTargetException[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)[0m | |
[1m[31mE at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)[0m | |
[1m[31mE at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader.createClient(IsolatedClientLoader.scala:264)[0m | |
[1m[31mE ... 36 more[0m | |
[1m[31mE Caused by: java.lang.NoClassDefFoundError: org/apache/hadoop/hive/ql/session/SessionState[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.HiveClientImpl.<init>(HiveClientImpl.scala:136)[0m | |
[1m[31mE ... 41 more[0m | |
[1m[31mE Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.ql.session.SessionState[0m | |
[1m[31mE at java.base/java.net.URLClassLoader.findClass(URLClassLoader.java:471)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:588)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.doLoadClass(IsolatedClientLoader.scala:221)[0m | |
[1m[31mE at org.apache.spark.sql.hive.client.IsolatedClientLoader$$anon$1.loadClass(IsolatedClientLoader.scala:210)[0m | |
[1m[31mE at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:521)[0m | |
[1m[31mE ... 42 more[0m | |
[1m[31m../../pypi__py4j_0_10_4/py4j/protocol.py[0m:319: Py4JJavaError | |
[33mDuring handling of the above exception, another exception occurred:[0m | |
self = <tests.integration.test_twitch_streams_merger.TestTwitchStreamMerger testMethod=test_ascii_trapezoid_double_ticks_whole> | |
[1m def test_ascii_trapezoid_double_ticks_whole(self):[0m | |
[1m """Process trapezoid in as a whole."""[0m | |
[1m delta = with_fields([0m | |
[1m            self.MESSAGE,[0m | |