@mschoch
Created March 30, 2017 12:43
./testrunner -i cbft.ini -t fts.stable_topology_fts.StableTopFTS.test_sorting_of_results,items=100,sort_fields=languages_known,advanced_sort=True,sort_by=field,sort_missing=first,sort_desc=False,sort_mode=min,expected=emp10000001,emp10000071,emp10000042,cluster=D+F
Global Test input params:
{'cluster_name': 'cbft', 'ini': 'cbft.ini', 'num_nodes': 1}
Logs will be stored at /Users/mschoch/Documents/research/cbsource/testrunner/logs/testrunner-17-Mar-30_08-38-18/test_1
./testrunner -i cbft.ini -t fts.stable_topology_fts.StableTopFTS.test_sorting_of_results,items=100,sort_fields=languages_known,advanced_sort=True,sort_by=field,sort_missing=first,sort_desc=False,sort_mode=min,expected=emp10000001,emp10000071,emp10000042,cluster=D+F
Test Input params:
{'logs_folder': '/Users/mschoch/Documents/research/cbsource/testrunner/logs/testrunner-17-Mar-30_08-38-18/test_1', 'sort_desc': 'False', 'advanced_sort': 'True', 'items': '100', 'sort_fields': 'languages_known', 'sort_by': 'field', 'cluster_name': 'cbft', 'cluster': 'D+F', 'sort_missing': 'first', 'case_number': 1, 'expected': 'emp10000001,emp10000071,emp10000042', 'sort_mode': 'min', 'num_nodes': 1, 'ini': 'cbft.ini'}
Run before suite setup for fts.stable_topology_fts.StableTopFTS.test_sorting_of_results
test_sorting_of_results (fts.stable_topology_fts.StableTopFTS) ... 2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is started for test #1 test_sorting_of_results ====
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] removing nodes from cluster ...
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] cleanup [ip:127.0.0.1 port:9000 ssh_username:root]
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets [] on 127.0.0.1
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'...
2017-03-30 08:38:18 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://127.0.0.1:9000/settings/rbac/users/builtin/cbadminbucket error 404 reason: unknown "User was not found."
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] "User was not found."
2017-03-30 08:38:18 | INFO | MainProcess | test_thread | [fts_base.init_cluster] Initializing Cluster ...
2017-03-30 08:38:19 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:127.0.0.1 port:9000 ssh_username:root, nodes/self: {'ip': u'127.0.0.1', 'availableStorage': [], 'rest_username': '', 'id': u'[email protected]', 'uptime': u'68', 'mcdMemoryReserved': 13107, 'storageTotalRam': 14063, 'hostname': u'127.0.0.1:9000', 'storage': [<membase.api.rest_client.NodeDataStorage object at 0x102af4750>], 'moxi': 12001, 'port': u'9000', 'version': u'4.0.0r-2760-g52bccc9-enterprise', 'memcached': 12000, 'status': u'healthy', 'clusterCompatibility': 327680, 'curr_items': 0, 'services': [u'fts', u'index', u'kv', u'n1ql'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 4662521856, 'memoryTotal': 17179869184, 'memoryQuota': 2048, 'mcdMemoryAllocated': 13107, 'os': u'x86_64-apple-darwin13.4.0', 'ports': []}
2017-03-30 08:38:19 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8738
2017-03-30 08:38:19 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=forestdb
2017-03-30 08:38:19 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] http://127.0.0.1:9000/settings/indexes error 400 reason: unknown {"errors":{"storageMode":"Changing the optimization mode of global indexes is not supported when index service nodes are present in the cluster. Please remove all index service nodes to change this option."}}
2017-03-30 08:38:19 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 127.0.0.1:9000:username=Administrator&password=password&port=9000
2017-03-30 08:38:19 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://127.0.0.1:9000/settings/rbac/users/builtin/cbadminbucket error 404 reason: unknown "User was not found."
2017-03-30 08:38:19 | INFO | MainProcess | test_thread | [internal_user.delete_user] Exception while deleting user. Exception is -"User was not found."
2017-03-30 08:38:40 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://127.0.0.1:9000/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=8238&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2017-03-30 08:38:40 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.03 seconds to create bucket default
2017-03-30 08:38:40 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 127.0.0.1 to accept set ops
2017-03-30 08:38:41 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:42 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:42 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'default' was created with per node RAM quota: 8238
2017-03-30 08:38:42 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is finished for test #1 test_sorting_of_results ====
2017-03-30 08:38:42 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:43 | INFO | Process-3 | load_gen_task | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:43 | INFO | Process-4 | load_gen_task | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:43 | INFO | Process-2 | load_gen_task | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:43 | INFO | Process-5 | load_gen_task | [data_helper.direct_client] creating direct client 127.0.0.1:12000 default
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [fts_base.load_data] Loading phase complete!
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ...
2017-03-30 08:38:44 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://127.0.0.1:9200/api/index/default_index error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2017-03-30 08:38:44 | ERROR | MainProcess | test_thread | [rest_client._http_request] http://127.0.0.1:9200/api/index/default_index error 403 reason: status: 403, content: rest_auth: preparePerm, err: index not found
rest_auth: preparePerm, err: index not found
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index default_index on 127.0.0.1
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"params": {}, "name": "default_index", "planParams": {"numReplicas": 0, "maxPartitionsPerPIndex": 171}, "sourceName": "default", "sourceUUID": "", "sourceType": "couchbase", "type": "fulltext-index", "sourceParams": {"authUser": "default", "dataManagerSleepMaxMS": 20000, "authSaslUser": "", "clusterManagerSleepMaxMS": 20000, "authSaslPassword": "", "clusterManagerSleepInitMS": 0, "dataManagerBackoffFactor": 0, "authPassword": "", "dataManagerSleepInitMS": 0, "feedBufferAckThreshold": 0, "feedBufferSizeBytes": 0, "clusterManagerBackoffFactor": 0}, "uuid": ""}
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index default_index created
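(For reference, the index definition logged above can also be created directly against the FTS REST endpoint. This is a minimal sketch, not testrunner code; it assumes the cluster_run FTS node at 127.0.0.1:9200 and Administrator/password credentials, and omits the sourceParams shown in the log.)

```python
# Sketch only: create the same fulltext-index via the FTS REST API.
# Assumes 127.0.0.1:9200 and Administrator/password; adjust for your cluster.
import json
import requests

index_def = {
    "name": "default_index",
    "type": "fulltext-index",
    "sourceType": "couchbase",
    "sourceName": "default",
    "planParams": {"numReplicas": 0, "maxPartitionsPerPIndex": 171},
    "params": {},
}

resp = requests.put(
    "http://127.0.0.1:9200/api/index/default_index",
    auth=("Administrator", "password"),
    headers={"Content-Type": "application/json"},
    data=json.dumps(index_def),
)
resp.raise_for_status()
print(resp.json())  # expect {"status": "ok"} on success
```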
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for default_index ...
2017-03-30 08:38:44 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ...
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 6
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 171 partitions or lesser
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: pIndexes are distributed across [u'd8181eeaee4eba3c6c7d9e74e3fd7dee']
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range 853-1024
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node d8181eeaee4eba3c6c7d9e74e3fd7dee houses 6 pindexes which serve 1024 partitions
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 100, docs in FTS index 'default_index': 100
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.run_fts_query] Running query {"sort": [{"field": "languages_known", "desc": false, "by": "field", "missing": "first", "mode": "min"}], "indexName": "default_index", "from": 0, "fields": [], "explain": false, "ctl": {"timeout": 60000, "consistency": {"vectors": {}, "level": ""}}, "query": {"disjuncts": [{"field": "name", "match": "Safiya"}, {"field": "name", "match": "Palmer"}]}, "size": 10000000} on node: 127.0.0.1:9200
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [stable_topology_fts.test_sorting_of_results] Hits: 3
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [stable_topology_fts.test_sorting_of_results] Doc IDs: [{u'sort': [u'dutch'], u'index': u'default_index_6b1c81bb13bf8841_13aa53f3', u'score': 3.3025850364727094, u'id': u'emp10000001'}, {u'sort': [u'german'], u'index': u'default_index_6b1c81bb13bf8841_6ddbfb54', u'score': 0.6171131555733937, u'id': u'emp10000071'}, {u'sort': [u'malay'], u'index': u'default_index_6b1c81bb13bf8841_54820232', u'score': 0.6553338097445238, u'id': u'emp10000042'}]
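(The per-hit "sort" values above are the min-mode keys for languages_known: dutch, german, malay. Since the sort is ascending on that key, the hits come back as emp10000001, emp10000071, emp10000042, matching the expected= parameter. The sketch below, under the same 127.0.0.1:9200 / Administrator/password assumptions as above, reruns the logged query and checks that ordering; it is illustrative only, not part of the test.)

```python
# Sketch only: rerun the sorted query from the log and verify hit order.
import json
import requests

query = {
    "query": {"disjuncts": [{"field": "name", "match": "Safiya"},
                            {"field": "name", "match": "Palmer"}]},
    "sort": [{"by": "field", "field": "languages_known",
              "desc": False, "missing": "first", "mode": "min"}],
    "from": 0,
    "size": 10000000,
}

resp = requests.post(
    "http://127.0.0.1:9200/api/index/default_index/query",
    auth=("Administrator", "password"),
    headers={"Content-Type": "application/json"},
    data=json.dumps(query),
)
hits = [hit["id"] for hit in resp.json()["hits"]]
# dutch < german < malay, so the ascending min-mode sort yields this order:
assert hits == ["emp10000001", "emp10000071", "emp10000042"]
```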
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.tearDown] ==== FTSbasetests cleanup is started for test #1 test_sorting_of_results ====
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.delete] Deleting fulltext-index default_index on 127.0.0.1
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 127.0.0.1 with username:root password:couchbase ssh_key:
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 127.0.0.1
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: /sbin/sysctl -n machdep.cpu.brand_string
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 127.0.0.1: ls /Users/mschoch/Documents/research/cbsource/ns_server/data/n_0/data/@fts |grep default_index*.pindex | wc -l
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.are_index_files_deleted_from_disk] 0
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.delete] Validated: all index files for default_index deleted from disk
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] removing nodes from cluster ...
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] cleanup [ip:127.0.0.1 port:9000 ssh_username:root]
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets [u'default'] on 127.0.0.1
2017-03-30 08:38:49 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket default ...
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 127.0.0.1
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 127.0.0.1 existing buckets : []
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 127.0.0.1:9000
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 127.0.0.1:9000 is running
Cluster instance shutdown with force
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'...
2017-03-30 08:38:50 | INFO | MainProcess | test_thread | [fts_base.tearDown] ==== FTSbasetests cleanup is finished for test #1 test_sorting_of_results ===
Cluster instance shutdown with force
ok
----------------------------------------------------------------------
Ran 1 test in 31.512s
OK
summary so far suite fts.stable_topology_fts.StableTopFTS , pass 1 , fail 0
testrunner logs, diags and results are available under /Users/mschoch/Documents/research/cbsource/testrunner/logs/testrunner-17-Mar-30_08-38-18/test_1
Run after suite setup for fts.stable_topology_fts.StableTopFTS.test_sorting_of_results
Thread Cluster_Thread was not properly terminated, will be terminated now.
Thread Cluster_Thread was not properly terminated, will be terminated now.