@MalloZup
Created April 9, 2021 09:26
dmaiocchi@linux-buft:~/bin/hawk> docker run --ipc=host -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY hawk_test -H 10.162.30.219 -S 10.162.32.9 -s linux --xvfb -t 15-SP2
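The invocation above runs hawk_test, the Selenium-based test driver for HAWK (the HA Web Konsole), inside Docker. Reading the flags against the log that follows: -H appears to name the HAWK host, -S a second cluster node used for the ssh verification steps, -s the login secret, --xvfb a headless virtual framebuffer for the browser, and -t the 15-SP2 test profile; the X11 socket mount and DISPLAY pass-through would only matter when running with a visible browser. These readings are inferred from the output below, not from the tool's documentation.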
TEST: test_set_stonith_maintenance: Placing stonith-sbd in maintenance
INFO: stonith-sbd successfully placed in maintenance mode
INFO: Main page. Click on Logout
TEST: verify_stonith_in_maintenance
INFO: ssh command [crm status | grep stonith-sbd] got output [ * stonith-sbd (stonith:external/sbd): Started hana01 (unmanaged)] and error []
INFO: stonith-sbd is unmanaged
TEST: test_disable_stonith_maintenance: Re-activating stonith-sbd
INFO: stonith-sbd successfully reactivated
INFO: Main page. Click on Logout
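The test drives this through the HAWK UI; a minimal crmsh sketch of the same round trip, run on any cluster node, would be:

    # put the fencing resource into maintenance, verify, then re-enable
    crm resource maintenance stonith-sbd on
    crm status | grep stonith-sbd   # expect '(unmanaged)' in the output
    crm resource maintenance stonith-sbd off
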
TEST: test_view_details_first_node: Checking details of first cluster node
INFO: Main page. Click on Nodes
INFO: Main page. Click on Logout
TEST: test_clear_state_first_node
INFO: Main page. Click on Nodes
INFO: Main page. Click on Clear state
INFO: cleared state of first node successfully
INFO: Main page. Click on Logout
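One way to do the same from the shell (hana01 assumed as the first node) is a per-node cleanup, which is presumably what the Clear state button triggers:

    # clear the node's resource failure history so the cluster re-probes it
    crm_resource --cleanup --node hana01
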
TEST: test_set_first_node_maintenance: switching node to maintenance
INFO: Main page. Click on Nodes
INFO: node successfully switched to maintenance mode
INFO: Main page. Click on Logout
TEST: verify_node_maintenance: check cluster node is in maintenance mode
INFO: ssh command [crm status | grep -i node] got output [ * 2 nodes configured
Node List:
  * Node hana01: maintenance] and error []
INFO: cluster node set successfully in maintenance mode
TEST: test_disable_maintenance_first_node: switching node to ready
INFO: Main page. Click on Nodes
INFO: node successfully switched to ready mode
INFO: Main page. Click on Logout
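Both node transitions map directly onto crmsh node subcommands; a sketch assuming hana01:

    crm node maintenance hana01   # switch the node to maintenance
    crm status | grep -i node     # expect 'Node hana01: maintenance'
    crm node ready hana01         # switch it back to ready
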
TEST: test_add_new_cluster
INFO: Main page. Click on Dashboard
INFO: Main page. Click on Logout
TEST: test_remove_cluster
INFO: Main page. Click on Dashboard
INFO: Successfully removed cluster: [Anderes]
INFO: Main page. Click on Logout
TEST: test_click_on_history
INFO: Main page. Click on History
INFO: Main page. Click on Logout
TEST: test_generate_report: click on Generate report
INFO: Main page. Click on History
INFO: 60 seconds timeout while looking for element [alert-success] by [class name]
INFO: 5 seconds timeout while looking for element [Rename] by [partial link text]
ERROR: failed to generate report
INFO: Main page. Click on Logout
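Report generation can take a while on a real cluster, so the 60-second wait for the alert-success banner may simply be too short rather than the cluster being at fault. As a fallback, the same data can be collected on a node with crm report (the time window and target path here are illustrative):

    crm report -f '2021/04/09 08:00' /tmp/hawk_report
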
TEST: test_click_on_command_log
INFO: Main page. Click on Command Log
INFO: Main page. Click on Logout
TEST: test_click_on_status
INFO: Main page. Click on Status
INFO: Main page. Click on Logout
TEST: test_add_primitive: Add Resources: Primitive cool_primitive
INFO: Main page. Click on Resource
INFO: Main page. Click on Primitive
INFO: Successfully added primitive [cool_primitive] of class [ocf:heartbeat:anything]
INFO: Main page. Click on Logout
TEST: verify_primitive: check primitive [cool_primitive] exists
INFO: ssh command [crm configure show] got output [node 1084758026: hana01 \
    attributes lpa_prd_lpt=1617959788 hana_prd_vhost=hana01 hana_prd_site=Site1 hana_prd_op_mode=logreplay_readaccess hana_prd_srmode=sync hana_prd_remoteHost=hana02 maintenance=off
node 1084758027: hana02 \
    attributes lpa_prd_lpt=30 hana_prd_op_mode=logreplay_readaccess hana_prd_vhost=hana02 hana_prd_remoteHost=hana01 hana_prd_site=Site2 hana_prd_srmode=sync
primitive cool_primitive anything \
    params binfile=file \
    op start timeout=35s interval=0 \
    op stop timeout=15s on-fail=stop interval=0 \
    op monitor timeout=9s interval=13s \
    meta target-role=Started
#####################
# SAP HANA resources
#####################
primitive rsc_SAPHanaTopology_PRD_HDB00 ocf:suse:SAPHanaTopology \
    params SID=PRD InstanceNumber=00 \
    op monitor interval=10 timeout=600 \
    op start interval=0 timeout=600 \
    op stop interval=0 timeout=300
primitive rsc_SAPHana_PRD_HDB00 ocf:suse:SAPHana \
    params SID=PRD InstanceNumber=00 PREFER_SITE_TAKEOVER=True AUTOMATED_REGISTER=False DUPLICATE_PRIMARY_TIMEOUT=7200 \
    op start interval=0 timeout=3600 \
    op stop interval=0 timeout=3600 \
    op promote interval=0 timeout=3600 \
    op monitor interval=60 role=Master timeout=700 \
    op monitor interval=61 role=Slave timeout=700
#######################################
# non-production HANA - Cost optimized
#######################################
###############################
# Active/Active HANA resources
###############################
######################################
# prometheus-hanadb_exporter resource
######################################
primitive rsc_exporter_PRD_HDB00 systemd:prometheus-hanadb_exporter@PRD_HDB00 \
    op start interval=0 timeout=100 \
    op stop interval=0 timeout=100 \
    op monitor interval=10 \
    meta target-role=Started
#####################################################
# Fencing agents - Native agents for cloud providers
#####################################################
######################################
# Floating IP address resource agents
######################################
primitive rsc_ip_PRD_HDB00 IPaddr2 \
    params ip=192.168.24.12 cidr_netmask=24 nic=eth1 \
    op start timeout=20 interval=0 \
    op stop timeout=20 interval=0 \
    op monitor interval=10 timeout=20
primitive stonith-sbd stonith:external/sbd \
    params pcmk_delay_max=30s
ms msl_SAPHana_PRD_HDB00 rsc_SAPHana_PRD_HDB00 \
    meta clone-max=2 clone-node-max=1 interleave=true
clone cln_SAPHanaTopology_PRD_HDB00 rsc_SAPHanaTopology_PRD_HDB00 \
    meta clone-node-max=1 interleave=true
colocation col_exporter_PRD_HDB00 +inf: rsc_exporter_PRD_HDB00:Started msl_SAPHana_PRD_HDB00:Master
colocation col_saphana_ip_PRD_HDB00 2000: rsc_ip_PRD_HDB00:Started msl_SAPHana_PRD_HDB00:Master
order ord_SAPHana_PRD_HDB00 Optional: cln_SAPHanaTopology_PRD_HDB00 msl_SAPHana_PRD_HDB00
property SAPHanaSR: \
    hana_prd_site_srHook_Site2=SOK
property cib-bootstrap-options: \
    have-watchdog=true \
    dc-version="2.0.4+20200616.2deceaa3a-3.3.1-2.0.4+20200616.2deceaa3a" \
    cluster-infrastructure=corosync \
    cluster-name=hana_cluster \
    stonith-enabled=true
rsc_defaults rsc-options: \
    resource-stickiness=1000 \
    migration-threshold=5000
op_defaults op-options: \
    timeout=600 \
    record-pending=true] and error []
INFO: primitive [cool_primitive] correctly defined in the cluster configuration
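The cool_primitive stanza in the dump is exactly what the UI built; the same definition as a single crmsh command would be:

    crm configure primitive cool_primitive ocf:heartbeat:anything \
        params binfile=file \
        op start timeout=35s interval=0 \
        op stop timeout=15s on-fail=stop interval=0 \
        op monitor timeout=9s interval=13s \
        meta target-role=Started
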
TEST: test_remove_primitive: Remove Primitive: cool_primitive
INFO: Remove Resource: cool_primitive
INFO: Check edit configuration
INFO: Main page. Click on Edit Configuration
INFO: 5 seconds timeout while looking for element [//a[contains(@href, "cool_primitive") and contains(@title, "Delete")]] by [xpath]
INFO: Successfully removed resource [cool_primitive]
INFO: Main page. Click on Logout
TEST: verify_primitive_removed: check primitive [cool_primitive] is removed
INFO: ssh command [crm resource status | grep ocf::heartbeat:anything] got output [] and error []
INFO: primitive successfully removed
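From the shell, the equivalent teardown is to stop the resource and then delete its definition:

    crm resource stop cool_primitive
    crm configure delete cool_primitive
    crm resource status | grep ocf::heartbeat:anything   # expect empty output
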
TEST: test_add_clone: Adding clone [cool_clone]
INFO: Main page. Click on Resource
INFO: Successfully added clone [cool_clone] of [stonith-sbd]
INFO: Main page. Click on Logout
TEST: test_remove_clone: Remove Clone: cool_clone
INFO: Remove Resource: cool_clone
INFO: Check edit configuration
INFO: Main page. Click on Edit Configuration
INFO: 5 seconds timeout while looking for element [//a[contains(@href, "cool_clone") and contains(@title, "Delete")]] by [xpath]
INFO: Successfully removed resource [cool_clone]
INFO: Main page. Click on Logout
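A crmsh sketch of the same clone round trip (cloning stonith-sbd, as the test does):

    crm configure clone cool_clone stonith-sbd
    crm configure delete cool_clone
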
TEST: test_add_group: Adding group [cool_group]
INFO: Main page. Click on Resource
INFO: Successfully added group [cool_group] of [stonith-sbd]
INFO: Main page. Click on Logout
TEST: test_remove_group: Remove Group: cool_group
INFO: Remove Resource: cool_group
INFO: Check edit configuration
INFO: Main page. Click on Edit Configuration
INFO: 5 seconds timeout while looking for element [//a[contains(@href, "cool_group") and contains(@title, "Delete")]] by [xpath]
INFO: Successfully removed resource [cool_group]
INFO: Main page. Click on Logout
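And the matching sketch for the group test:

    crm configure group cool_group stonith-sbd
    crm configure delete cool_group
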
TEST: test_click_around_edit_conf
TEST: Will click on Constraints, Nodes, Tags, Alerts and Fencing
INFO: Check edit configuration
INFO: Main page. Click on Edit Configuration
INFO: Main page. Click on Logout
TEST: test_fencing
INFO: Main page. Click on Nodes
INFO: Main page. Click on Fence
INFO: Master node successfully fenced
INFO: Main page. Click on Logout
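The Fence button maps onto a crmsh one-liner; assuming hana01 is the master node being fenced:

    crm node fence hana01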