@ideepika
Created March 12, 2021 20:36
```
2021-03-12T01:56:27.042 INFO:tasks.ceph.osd.0.smithi160.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.9-136-gdbb79e05/rpm/el8/BUILD/ceph-15.2.9-136-gdbb79e05/src/osd/PG.cc: In function 'virtual void PG::on_active_advmap(const OSDMapRef&)' thread 7f759466d700 time 2021-03-12T01:56:27.030656+0000
2021-03-12T01:56:27.042 INFO:tasks.ceph.osd.0.smithi160.stderr:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.9-136-gdbb79e05/rpm/el8/BUILD/ceph-15.2.9-136-gdbb79e05/src/osd/PG.cc: 1689: FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
2021-03-12T01:56:27.042 INFO:tasks.ceph.osd.0.smithi160.stderr:2021-03-12T01:56:27.029+0000 7f759466d700 -1 osd.0 pg_epoch: 119 pg[57.d( v 118'71 (0'0,118'71] local-lis/les=75/76 n=31 ec=49/49 lis/c=75/75 les/c/f=76/76/0 sis=75) [0,6] r=0 lpr=75 crt=118'71 lcod 118'69 mlcod 118'69 active+clean+snaptrim trimq=[4~4](4)] on_active_advmap removed_snaps already contains [4~1]
2021-03-12T01:56:27.043 INFO:tasks.ceph.osd.0.smithi160.stderr: ceph version 15.2.9-136-gdbb79e05 (dbb79e0547db3abf076b9bc9b6ad97ede0519a0e) octopus (stable)
2021-03-12T01:56:27.044 INFO:tasks.ceph.osd.0.smithi160.stderr: 1: (ceph::__ceph_assert_fail(char const*, char const*, int, char const*)+0x158) [0x55ea31d46bee]
2021-03-12T01:56:27.044 INFO:tasks.ceph.osd.0.smithi160.stderr: 2: (()+0x506e08) [0x55ea31d46e08]
2021-03-12T01:56:27.044 INFO:tasks.ceph.osd.0.smithi160.stderr: 3: (PG::on_active_advmap(std::shared_ptr<OSDMap const> const&)+0x1061) [0x55ea31eccc51]
2021-03-12T01:56:27.044 INFO:tasks.ceph.osd.0.smithi160.stderr: 4: (PeeringState::Active::react(PeeringState::AdvMap const&)+0x147) [0x55ea320bc207]
2021-03-12T01:56:27.045 INFO:tasks.ceph.osd.0.smithi160.stderr: 5: (boost::statechart::simple_state<PeeringState::Active, PeeringState::Primary, PeeringState::Activating, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x185) [0x55ea320efd55]
2021-03-12T01:56:27.045 INFO:tasks.ceph.osd.0.smithi160.stderr: 6: (boost::statechart::simple_state<PeeringState::Clean, PeeringState::Active, boost::mpl::list<mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na, mpl_::na>, (boost::statechart::history_mode)0>::react_impl(boost::statechart::event_base const&, void const*)+0x3e) [0x55ea320f6bae]
2021-03-12T01:56:27.045 INFO:tasks.ceph.osd.0.smithi160.stderr: 7: (PeeringState::advance_map(std::shared_ptr<OSDMap const>, std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PeeringCtx&)+0x1ff) [0x55ea3209c83f]
2021-03-12T01:56:27.045 INFO:tasks.ceph.osd.0.smithi160.stderr: 8: (PG::handle_advance_map(std::shared_ptr<OSDMap const>, std::shared_ptr<OSDMap const>, std::vector<int, std::allocator<int> >&, int, std::vector<int, std::allocator<int> >&, int, PeeringCtx&)+0x1e6) [0x55ea31ede856]
2021-03-12T01:56:27.046 INFO:tasks.ceph.osd.0.smithi160.stderr: 9: (OSD::advance_pg(unsigned int, PG*, ThreadPool::TPHandle&, PeeringCtx&)+0x313) [0x55ea31e528f3]
2021-03-12T01:56:27.046 INFO:tasks.ceph.osd.0.smithi160.stderr: 10: (OSD::dequeue_peering_evt(OSDShard*, PG*, std::shared_ptr<PGPeeringEvent>, ThreadPool::TPHandle&)+0xa4) [0x55ea31e54a44]
```
/ceph/teuthology-archive/yuriw-2021-03-09_20:27:38-rados-wip-yuri2-testing-2021-03-09-1006-octopus-distro-basic-smithi/5951038/teuthology.log
hold: https://github.com/ceph/ceph/pull/39360
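The assert at PG.cc:1689 trips when the PG's cached removed_snaps already covers an interval that the new OSDMap reports as freshly removed. A minimal Python sketch of that condition, assuming `osd_debug_verify_cached_snaps` is enabled in this QA run (this is not Ceph source, just a model of the check):

```python
# Hedged sketch (not Ceph source): models the check behind
#   FAILED ceph_assert(!bad || !cct->_conf->osd_debug_verify_cached_snaps)
# Ceph prints snap intervals as [start~length], so "[4~1]" is just snap 4.

def contains(intervals, start, length):
    """True if every snap in [start, start+length) is already cached."""
    return all(any(s <= x < s + l for (s, l) in intervals)
               for x in range(start, start + length))

cached_removed_snaps = [(4, 1)]  # per the log: "already contains [4~1]"
# The new OSDMap reports [4~1] as newly removed -> overlaps the cache.
bad = contains(cached_removed_snaps, 4, 1)

# Assumed on in this teuthology run; with it enabled, the overlap
# aborts the OSD instead of only being logged.
osd_debug_verify_cached_snaps = True
print(not bad or not osd_debug_verify_cached_snaps)  # False -> assert fires
```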
hey @ifed01, can you take a look:
```
2021-03-11T21:28:39.818 INFO:teuthology.orchestra.run.smithi179.stderr:available_objects: 1 in_flight_objects: 0 total objects: 1 in_flight 0
2021-03-11T21:28:42.547 INFO:teuthology.orchestra.run.smithi179.stdout:/home/jenkins-build/build/workspace/ceph-dev-new-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos8/DIST/centos8/MACHINE_SIZE/gigantic/release/15.2.9-136-gdbb79e05/rpm/el8/BUILD/ceph-15.2.9-136-gdbb79e05/src/test/objectstore/store_test.cc:6520: Failure
2021-03-11T21:28:42.550 INFO:teuthology.orchestra.run.smithi179.stdout:Expected: (res_stat.allocated) <= (max_object_size), actual: 4259840 vs 4194304
2021-03-11T21:28:42.948 INFO:teuthology.orchestra.run.smithi179.stdout:==> rm -r bluestore.test_temp_dir
2021-03-11T21:28:43.059 INFO:teuthology.orchestra.run.smithi179.stdout:[ FAILED ] ObjectStore/StoreTestSpecificAUSize.Many4KWritesTest/2, where GetParam() = "bluestore" (7699 ms)
```
/ceph/teuthology-archive/yuriw-2021-03-09_20:27:38-rados-wip-yuri2-testing-2021-03-09-1006-octopus-distro-basic-smithi/5950471/teuthology.log
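For what it's worth, the overshoot in that failed expectation is exactly 64 KiB over the 4 MiB `max_object_size`, which is suggestive of one extra bluestore allocation unit (a guess, not a diagnosis):

```python
# Numbers copied from the gtest failure above.
allocated = 4259840
max_object_size = 4 * 1024 * 1024    # 4194304 bytes = 4 MiB
print(allocated - max_object_size)   # 65536 bytes == 64 KiB over
print(allocated <= max_object_size)  # False, hence the FAILED expectation
```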
-----------------------------------------------------------
```
2021-03-12T00:55:45.754 DEBUG:teuthology.orchestra.run.smithi035:> sudo adjust-ulimits ceph-coverage /home/ubuntu/cephtest/archive/coverage timeout 120 ceph --cluster ceph --log-early mon scrub
2021-03-12T00:55:46.080 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2021-03-12T00:55:46.081 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {auth=16,config=2,health=10,logm=72} crc {auth=1991235939,config=3140514009,health=4291592628,logm=1315153905})
2021-03-12T00:55:46.081 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.1 ScrubResult(keys {auth=15,config=2,health=11,logm=72} crc {auth=2978531865,config=3140514009,health=154278552,logm=1264152812})
2021-03-12T00:55:46.081 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2021-03-12T00:55:46.082 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {auth=16,config=2,health=10,logm=72} crc {auth=1991235939,config=3140514009,health=4291592628,logm=1315153905})
2021-03-12T00:55:46.082 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.077+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.2 ScrubResult(keys {auth=16,config=1,health=11,logm=72} crc {auth=1991235939,config=1228700967,health=154278552,logm=3488265579})
2021-03-12T00:55:46.083 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2021-03-12T00:55:46.084 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {logm=100} crc {logm=92674063})
2021-03-12T00:55:46.084 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.1 ScrubResult(keys {logm=100} crc {logm=2520843550})
2021-03-12T00:55:46.085 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2021-03-12T00:55:46.085 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {logm=100} crc {logm=92674063})
2021-03-12T00:55:46.085 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.080+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.2 ScrubResult(keys {logm=100} crc {logm=4069442593})
2021-03-12T00:55:46.085 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.081+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : scrub mismatch
2021-03-12T00:55:46.086 INFO:tasks.ceph.mon.a.smithi035.stderr:2021-03-12T00:55:46.081+0000 7f9d36096700 -1 log_channel(cluster) log [ERR] : mon.0 ScrubResult(keys {logm=14,mdsmap=3,mgr=7,mgr_command_descs=1,mgr_metadata=1,mgrstat=8,mon_config_key=4,monmap=3,osd_metadata=6,osd_pg_creating=1,osdmap=52} crc {logm=1402567251,mdsmap=2052314213,mgr=2607564598,mgr_command_descs=4099536792,mgr_metadata=1125149981,mgrstat=173837168,mon_config_key=1336852348,monmap=710912603,osd_metadata=3219530233,osd_pg_creating=64926272,osdmap=2275106541})
```
/ceph/teuthology-archive/yuriw-2021-03-09_20:28:34-rados-wip-yuri4-testing-2021-03-09-1006-octopus-distro-basic-smithi/5950878/teuthology.log
related: https://tracker.ceph.com/issues/48440
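To make the mismatch easier to read: each mon returns a ScrubResult of per-prefix key counts and CRCs, and the leader logs an ERR for every prefix where they disagree. A small sketch diffing mon.0 against mon.1, with values copied from the first mismatch in the log:

```python
# Hedged sketch: ScrubResult modeled as prefix -> (key count, CRC).
# Values copied verbatim from the first "scrub mismatch" above.
mon0 = {"auth": (16, 1991235939), "config": (2, 3140514009),
        "health": (10, 4291592628), "logm": (72, 1315153905)}
mon1 = {"auth": (15, 2978531865), "config": (2, 3140514009),
        "health": (11, 154278552), "logm": (72, 1264152812)}

mismatched = sorted(p for p in mon0 if mon0[p] != mon1[p])
print(mismatched)  # ['auth', 'health', 'logm'] -- only 'config' agrees
```

Note that `logm` has the same key count (72) on both mons but different CRCs, so the disagreement is in the values, not just in which keys exist.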
```
2021-03-12T02:21:24.697 INFO:tasks.workunit.client.0.smithi014.stderr:test_rados.TestWatchNotify.test ... FAIL
2021-03-12T02:21:24.697 INFO:tasks.workunit.client.0.smithi014.stderr:Traceback (most recent call last):
2021-03-12T02:21:24.698 INFO:tasks.workunit.client.0.smithi014.stderr: File "rados.pyx", line 799, in rados.Rados.require_state
2021-03-12T02:21:24.698 INFO:tasks.workunit.client.0.smithi014.stderr:rados.RadosStateError: RADOS rados state (You cannot perform that operation on a Rados object in state shutdown.)
2021-03-12T02:21:24.698 INFO:tasks.workunit.client.0.smithi014.stderr:Exception ignored in: 'rados.Watch.__dealloc__'
2021-03-12T02:21:24.699 INFO:tasks.workunit.client.0.smithi014.stderr:Traceback (most recent call last):
2021-03-12T02:21:24.699 INFO:tasks.workunit.client.0.smithi014.stderr: File "rados.pyx", line 799, in rados.Rados.require_state
2021-03-12T02:21:24.699 INFO:tasks.workunit.client.0.smithi014.stderr:rados.RadosStateError: RADOS rados state (You cannot perform that operation on a Rados object in state shutdown.)
2021-03-12T02:21:24.699 INFO:tasks.workunit.client.0.smithi014.stderr:Traceback (most recent call last):
2021-03-12T02:21:24.699 INFO:tasks.workunit.client.0.smithi014.stderr: File "rados.pyx", line 799, in rados.Rados.require_state
2021-03-12T02:21:24.700 INFO:tasks.workunit.client.0.smithi014.stderr:rados.RadosStateError: RADOS rados state (You cannot perform that operation on a Rados object in state shutdown.)
2021-03-12T02:21:24.700 INFO:tasks.workunit.client.0.smithi014.stderr:Exception ignored in: 'rados.Watch.__dealloc__'
2021-03-12T02:21:24.700 INFO:tasks.workunit.client.0.smithi014.stderr:Traceback (most recent call last):
```
is this harmless? https://tracker.ceph.com/issues/45721 + related
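The "Exception ignored in: 'rados.Watch.__dealloc__'" pattern looks like a finalization-order issue: the Watch object is garbage-collected after the Rados handle was already shut down, so its teardown hits `require_state` and raises. A toy model with hypothetical names (not the real rados.pyx bindings):

```python
# Toy model (hypothetical names, NOT the real rados.pyx bindings) of the
# lifetime bug: a Watch finalized after Rados.shutdown() raises in its
# finalizer, and CPython prints "Exception ignored in: ..." to stderr
# instead of crashing -- which is why the test still runs to completion.

class RadosStateError(Exception):
    pass

class Rados:
    def __init__(self):
        self.state = "connected"

    def shutdown(self):
        self.state = "shutdown"

    def require_state(self, *states):
        # Mirrors the guard that raises at rados.pyx line 799.
        if self.state not in states:
            raise RadosStateError(
                "You cannot perform that operation on a Rados object "
                f"in state {self.state}.")

class Watch:
    def __init__(self, cluster):
        self.cluster = cluster

    def __del__(self):
        # Teardown still needs a live cluster handle.
        self.cluster.require_state("connected")

cluster = Rados()
w = Watch(cluster)
cluster.shutdown()  # teardown order: cluster handle first ...
del w               # ... Watch finalized second -> exception ignored
```

If that is the mechanism, the error is noise from teardown ordering rather than data-path breakage, which would fit the "harmless" reading of tracker 45721.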