@elsonrodriguez
Last active November 1, 2016 22:05
Ceph mount issue

When trying to mount a CephFS volume, the mount command hangs for several minutes and then fails with an I/O error:

mount -t ceph -o name=admin,secret=xxx==  ceph-mon.ceph.svc.harbor0.group.company.com:6789:/ /mnt/test/
mount error 5 = Input/output error

Kernel log output on the client attempting the mount:

[2948295.425025] libceph: client214643 fsid 80802a78-0c63-4146-8040-c93730f92515
[2948295.427647] libceph: mon2 10.9.219.6:6789 session established
[2948295.428430] libceph: wrong peer, want 10.54.9.212:6800/82, got 10.54.9.212:6800/21
[2948295.428438] libceph: mds0 10.54.9.212:6800 wrong peer at address
[2948296.166278] libceph: wrong peer, want 10.54.9.212:6800/82, got 10.54.9.212:6800/21
[2948296.166284] libceph: mds0 10.54.9.212:6800 wrong peer at address
[2948297.218324] libceph: wrong peer, want 10.54.9.212:6800/82, got 10.54.9.212:6800/21
[2948297.218331] libceph: mds0 10.54.9.212:6800 wrong peer at address
[2948299.398438] libceph: wrong peer, want 10.54.9.212:6800/82, got 10.54.9.212:6800/21
[2948299.398447] libceph: mds0 10.54.9.212:6800 wrong peer at address
root@node-f6dbfdbd:~# mount error 5 = Input/output error
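The "wrong peer" lines are the key: libceph addresses have the form ip:port/nonce, and the nonce changes every time a daemon restarts. Here the client's MDS map says the MDS at 10.54.9.212:6800 should have nonce 82, but the process actually listening there has nonce 21 — i.e. a different MDS instance than the one the map describes. A minimal sketch of pulling the two nonces out of such a log line (the sed patterns are mine, not part of any Ceph tooling):

```shell
# Extract the expected ("want") and observed ("got") nonces from a
# "wrong peer" log line; the address format is ip:port/nonce.
line='libceph: wrong peer, want 10.54.9.212:6800/82, got 10.54.9.212:6800/21'
want=$(echo "$line" | sed -n 's/.*want [^/]*\/\([0-9]*\),.*/\1/p')
got=$(echo "$line" | sed -n 's/.*got [^/]*\/\([0-9]*\)$/\1/p')
echo "want nonce: $want, got nonce: $got"
```

A mismatch like this typically means the MDS restarted (or its pod was replaced on the same IP, as can happen under Kubernetes) and the client is working from a stale map.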

Meanwhile the Ceph cluster reports itself as healthy, apart from a PG-count warning:

ceph status
    cluster 80802a78-0c63-4146-8040-c93730f92515
     health HEALTH_WARN
            too many PGs per OSD (1000 > max 300)
     monmap e15: 3 mons at {ceph-mon-1396930634-0l2ps=10.9.217.3:6789/0,ceph-mon-1396930634-0vjdx=10.9.216.7:6789/0,ceph-mon-1396930634-1v1s4=10.9.219.6:6789/0}
            election epoch 214, quorum 0,1,2 ceph-mon-1396930634-0vjdx,ceph-mon-1396930634-0l2ps,ceph-mon-1396930634-1v1s4
      fsmap e140: 1/1/1 up {0=mds-ceph-mds-0=up:active}
     osdmap e525: 7 osds: 6 up, 6 in
            flags sortbitwise
      pgmap v1417608: 2000 pgs, 15 pools, 13842 MB data, 4193 objects
            165 GB used, 1175 GB / 1340 GB avail
                2000 active+clean
  client io 1515 kB/s wr, 0 op/s rd, 9 op/s wr
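The "too many PGs per OSD (1000 > max 300)" warning is unrelated to the mount failure, but the arithmetic checks out against the status output above, assuming the default replication size of 3 for all pools (pool sizes are not shown here):

```shell
# Back-of-the-envelope check of the HEALTH_WARN: PG replicas per in-OSD.
# Assumes size=3 for every pool, which is the Ceph default.
pgs=2000; size=3; osds_in=6
echo $(( pgs * size / osds_in ))
```

That yields 1000, matching the warning — well over the mon_pg_warn_max_per_osd default of 300, and made worse by one of the seven OSDs being down/out.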