Rook Ceph Logs
@figassis · Last active June 11, 2019 09:43
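
Startup logs from a fresh single-mon Rook v1.0.2 deployment of Ceph 14.2.1 (nautilus) on Kubernetes, in three parts: the mon.a pod log, that mon's mon_status output, and the rook-ceph-operator log.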
$ kubectl logs -f rook-ceph-mon-a-74cc6db5c8-8s5l5
debug 2019-06-11 08:55:18.934 7fafa9547180 0 ceph version 14.2.1 (d555a9489eb35f84f2e1ef49b77e19da9d113972) nautilus (stable), process ceph-mon, pid 1
debug 2019-06-11 08:55:18.934 7fafa9547180 0 pidfile_write: ignore empty --pid-file
debug 2019-06-11 08:55:18.974 7fafa9547180 0 load: jerasure load: lrc load: isa
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option compression = kNoCompression
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option level_compaction_dynamic_level_bytes = true
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option write_buffer_size = 33554432
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option compression = kNoCompression
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option level_compaction_dynamic_level_bytes = true
debug 2019-06-11 08:55:18.974 7fafa9547180 0 set rocksdb option write_buffer_size = 33554432
debug 2019-06-11 08:55:18.974 7fafa9547180 1 rocksdb: do_open column families: [default]
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: RocksDB version: 5.17.2
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Git sha rocksdb_build_git_sha:@0@
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Compile date Apr 25 2019
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: DB SUMMARY
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: CURRENT file: CURRENT
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: IDENTITY file: IDENTITY
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: MANIFEST file: MANIFEST-000001 size: 13 Bytes
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: SST files in /var/lib/ceph/mon/ceph-a/store.db dir, Total Num: 0, files:
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Write Ahead Log file in /var/lib/ceph/mon/ceph-a/store.db: 000003.log size: 618 ;
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.error_if_exists: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.create_if_missing: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.paranoid_checks: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.env: 0x55a41eadc740
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.info_log: 0x55a42159a120
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_file_opening_threads: 16
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.statistics: (nil)
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.use_fsync: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_log_file_size: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_manifest_file_size: 1073741824
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.log_file_time_to_roll: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.keep_log_file_num: 1000
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.recycle_log_file_num: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.allow_fallocate: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.allow_mmap_reads: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.allow_mmap_writes: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.use_direct_reads: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.use_direct_io_for_flush_and_compaction: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.create_missing_column_families: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.db_log_dir:
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.wal_dir: /var/lib/ceph/mon/ceph-a/store.db
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.table_cache_numshardbits: 6
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_subcompactions: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_background_flushes: -1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.WAL_ttl_seconds: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.WAL_size_limit_MB: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.manifest_preallocation_size: 4194304
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.is_fd_close_on_exec: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.advise_random_on_open: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.db_write_buffer_size: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.write_buffer_manager: 0x55a4215408a0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.access_hint_on_compaction_start: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.new_table_reader_for_compaction_inputs: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.random_access_max_buffer_size: 1048576
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.use_adaptive_mutex: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.rate_limiter: (nil)
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.sst_file_manager.rate_bytes_per_sec: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.wal_recovery_mode: 2
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.enable_thread_tracking: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.enable_pipelined_write: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.allow_concurrent_memtable_write: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.enable_write_thread_adaptive_yield: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.write_thread_max_yield_usec: 100
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.write_thread_slow_yield_usec: 3
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.row_cache: None
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.wal_filter: None
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.avoid_flush_during_recovery: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.allow_ingest_behind: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.preserve_deletes: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.two_write_queues: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.manual_wal_flush: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_background_jobs: 2
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_background_compactions: -1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.avoid_flush_during_shutdown: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.writable_file_max_buffer_size: 1048576
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.delayed_write_rate : 16777216
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_total_wal_size: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.delete_obsolete_files_period_micros: 21600000000
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.stats_dump_period_sec: 600
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.max_open_files: -1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.bytes_per_sync: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.wal_bytes_per_sync: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Options.compaction_readahead_size: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Compression algorithms supported:
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kZSTDNotFinalCompression supported: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kZSTD supported: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kXpressCompression supported: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kLZ4HCCompression supported: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kLZ4Compression supported: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kBZip2Compression supported: 0
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kZlibCompression supported: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: kSnappyCompression supported: 1
debug 2019-06-11 08:55:18.974 7fafa9547180 4 rocksdb: Fast CRC32 supported: Supported on x86
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/version_set.cc:3406] Recovering from manifest file: MANIFEST-000001
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/column_family.cc:475] --------------- Options for column family [default]:
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.comparator: leveldb.BytewiseComparator
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.merge_operator:
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_filter: None
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_filter_factory: None
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.memtable_factory: SkipListFactory
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.table_factory: BlockBasedTable
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: table_factory options: flush_block_policy_factory: FlushBlockBySizePolicyFactory (0x55a420723668)
  cache_index_and_filter_blocks: 1
  cache_index_and_filter_blocks_with_high_priority: 1
  pin_l0_filter_and_index_blocks_in_cache: 1
  pin_top_level_index_and_filter: 1
  index_type: 0
  hash_index_allow_collision: 1
  checksum: 1
  no_block_cache: 0
  block_cache: 0x55a421589cf0
  block_cache_name: BinnedLRUCache
  block_cache_options:
    capacity : 536870912
    num_shard_bits : 4
    strict_capacity_limit : 0
    high_pri_pool_ratio: 0.000
  block_cache_compressed: (nil)
  persistent_cache: (nil)
  block_size: 4096
  block_size_deviation: 10
  block_restart_interval: 16
  index_block_restart_interval: 1
  metadata_block_size: 4096
  partition_filters: 0
  use_delta_encoding: 1
  filter_policy: rocksdb.BuiltinBloomFilter
  whole_key_filtering: 1
  verify_compression: 0
  read_amp_bytes_per_bit: 0
  format_version: 2
  enable_index_compression: 1
  block_align: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.write_buffer_size: 33554432
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_write_buffer_number: 2
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression: NoCompression
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression: Disabled
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.prefix_extractor: nullptr
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.memtable_insert_with_hint_prefix_extractor: nullptr
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.num_levels: 7
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.min_write_buffer_number_to_merge: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_write_buffer_number_to_maintain: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.window_bits: -14
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.level: 32767
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.strategy: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.max_dict_bytes: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.zstd_max_train_bytes: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bottommost_compression_opts.enabled: false
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.window_bits: -14
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.level: 32767
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.strategy: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.max_dict_bytes: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.zstd_max_train_bytes: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compression_opts.enabled: false
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.level0_file_num_compaction_trigger: 4
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.level0_slowdown_writes_trigger: 20
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.level0_stop_writes_trigger: 36
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.target_file_size_base: 67108864
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.target_file_size_multiplier: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_base: 268435456
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.level_compaction_dynamic_level_bytes: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier: 10.000000
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[0]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[1]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[2]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[3]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[4]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[5]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_bytes_for_level_multiplier_addtl[6]: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_sequential_skip_in_iterations: 8
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_compaction_bytes: 1677721600
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.arena_block_size: 4194304
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.soft_pending_compaction_bytes_limit: 68719476736
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.hard_pending_compaction_bytes_limit: 274877906944
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.rate_limit_delay_max_milliseconds: 100
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.disable_auto_compactions: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_style: kCompactionStyleLevel
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_pri: kByCompensatedSize
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.size_ratio: 1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.min_merge_width: 2
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.max_merge_width: 4294967295
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.max_size_amplification_percent: 200
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.compression_size_percent: -1
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_universal.stop_style: kCompactionStopStyleTotalSize
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_fifo.max_table_files_size: 1073741824
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_fifo.allow_compaction: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.compaction_options_fifo.ttl: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.table_properties_collectors:
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.inplace_update_support: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.inplace_update_num_locks: 10000
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.memtable_prefix_bloom_size_ratio: 0.000000
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.memtable_huge_page_size: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.bloom_locality: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.max_successive_merges: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.optimize_filters_for_hits: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.paranoid_file_checks: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.force_consistency_checks: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.report_bg_io_stats: 0
debug 2019-06-11 08:55:18.978 7fafa9547180 4 rocksdb: Options.ttl: 0
debug 2019-06-11 08:55:18.982 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/version_set.cc:3610] Recovered from manifest file:/var/lib/ceph/mon/ceph-a/store.db/MANIFEST-000001 succeeded,manifest_file_number is 1, next_file_number is 3, last_sequence is 0, log_number is 0,prev_log_number is 0,max_column_family is 0,min_log_number_to_keep is 0
debug 2019-06-11 08:55:18.982 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/version_set.cc:3618] Column family [default] (ID 0), log number is 0
debug 2019-06-11 08:55:18.982 7fafa9547180 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1560243318986357, "job": 1, "event": "recovery_started", "log_files": [3]}
debug 2019-06-11 08:55:18.982 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/db_impl_open.cc:561] Recovering log #3 mode 2
debug 2019-06-11 08:55:18.986 7fafa9547180 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1560243318988456, "cf_name": "default", "job": 1, "event": "table_file_creation", "file_number": 4, "file_size": 1376, "table_properties": {"data_size": 630, "index_size": 28, "filter_size": 23, "raw_key_size": 115, "raw_average_key_size": 23, "raw_value_size": 507, "raw_average_value_size": 101, "num_data_blocks": 1, "num_entries": 5, "filter_policy_name": "rocksdb.BuiltinBloomFilter", "kDeletedKeys": "0", "kMergeOperands": "0"}}
debug 2019-06-11 08:55:18.986 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/version_set.cc:2936] Creating manifest 5
debug 2019-06-11 08:55:18.986 7fafa9547180 4 rocksdb: EVENT_LOG_v1 {"time_micros": 1560243318990801, "job": 1, "event": "recovery_finished"}
debug 2019-06-11 08:55:18.990 7fafa9547180 4 rocksdb: [/home/jenkins-build/build/workspace/ceph-build/ARCH/x86_64/AVAILABLE_ARCH/x86_64/AVAILABLE_DIST/centos7/DIST/centos7/MACHINE_SIZE/huge/release/14.2.1/rpm/el7/BUILD/ceph-14.2.1/src/rocksdb/db/db_impl_open.cc:1287] DB pointer 0x55a421646000
debug 2019-06-11 08:55:18.994 7fafa9547180 0 starting mon.a rank 0 at public addrs [v2:10.233.31.119:3300/0,v1:10.233.31.119:6789/0] at bind addrs [v2:10.233.90.13:3300/0,v1:10.233.90.13:6789/0] mon_data /var/lib/ceph/mon/ceph-a fsid fcda2921-e04c-4d67-95e5-57cfc0ced914
debug 2019-06-11 08:55:18.994 7fafa9547180 1 mon.a@-1(???) e0 preinit fsid fcda2921-e04c-4d67-95e5-57cfc0ced914
debug 2019-06-11 08:55:18.994 7fafa9547180 1 mon.a@-1(???) e0 initial_members a, filtering seed monmap
debug 2019-06-11 08:55:18.998 7fafa9547180 0 mon.a@-1(probing) e0 my rank is now 0 (was -1)
debug 2019-06-11 08:55:18.998 7fafa9547180 1 mon.a@0(probing) e0 win_standalone_election
debug 2019-06-11 08:55:18.998 7fafa9547180 1 mon.a@0(probing).elector(0) init, first boot, initializing epoch at 1
debug 2019-06-11 08:55:18.998 7fafa9547180 -1 mon.a@0(electing) e0 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
debug 2019-06-11 08:55:18.998 7fafa9547180 0 log_channel(cluster) log [INF] : mon.a is new leader, mons a in quorum (ranks 0)
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader).osd e0 create_pending setting full_ratio = 0.95
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader).osd e0 do_prune osdmap full prune enabled
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
debug 2019-06-11 08:55:19.002 7fafa9547180 1 mon.a@0(leader) e0 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={4=support erasure code pools,5=new-style osdmap encoding,6=support isa/lrc erasure code,7=support shec erasure code}
debug 2019-06-11 08:55:19.010 7faf8f032700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
debug 2019-06-11 08:55:19.010 7faf8f032700 1 mon.a@0(probing) e1 win_standalone_election
debug 2019-06-11 08:55:19.010 7faf8f032700 1 mon.a@0(probing).elector(2) init, last seen epoch 2
debug 2019-06-11 08:55:19.010 7faf8f032700 -1 mon.a@0(electing) e1 failed to get devid for : udev_device_new_from_subsystem_sysname failed on ''
debug 2019-06-11 08:55:19.014 7faf8f032700 0 log_channel(cluster) log [INF] : mon.a is new leader, mons a in quorum (ranks 0)
debug 2019-06-11 08:55:19.014 7faf8f032700 0 log_channel(cluster) log [DBG] : monmap e1: 1 mons at {a=[v2:10.233.31.119:3300/0,v1:10.233.31.119:6789/0]}
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader).osd e0 create_pending setting backfillfull_ratio = 0.9
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader).osd e0 create_pending setting full_ratio = 0.95
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader).osd e0 create_pending setting nearfull_ratio = 0.85
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader).osd e0 do_prune osdmap full prune enabled
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader).osd e0 encode_pending skipping prime_pg_temp; mapping job did not start
debug 2019-06-11 08:55:19.014 7faf8f032700 1 mon.a@0(leader) e1 _apply_compatset_features enabling new quorum features: compat={},rocompat={},incompat={8=support monmap features,9=luminous ondisk layout,10=mimic ondisk layout,11=nautilus ondisk layout}
debug 2019-06-11 08:55:19.018 7faf8f032700 0 mon.a@0(leader).mds e1 new map
debug 2019-06-11 08:55:19.018 7faf8f032700 0 mon.a@0(leader).mds e1 print_map
e1
enable_multiple, ever_enabled_multiple: 0,0
compat: compat={},rocompat={},incompat={1=base v0.20,2=client writeable ranges,3=default file layouts on dirs,4=dir inode in separate object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no anchor table,9=file layout v2,10=snaprealm v2}
legacy client fscid: -1
No filesystems configured
debug 2019-06-11 08:55:19.018 7faf8f032700 1 mon.a@0(leader).paxosservice(auth 0..0) refresh upgraded, format 3 -> 0
debug 2019-06-11 08:55:19.018 7faf8f032700 0 log_channel(cluster) log [DBG] : fsmap
debug 2019-06-11 08:55:19.022 7faf8f032700 1 mon.a@0(leader).osd e1 e1: 0 total, 0 up, 0 in
debug 2019-06-11 08:55:19.022 7faf8f032700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
debug 2019-06-11 08:55:19.022 7faf8f032700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
debug 2019-06-11 08:55:19.022 7faf8f032700 0 mon.a@0(leader).osd e1 crush map has features 3314932999778484224, adjusting msgr requires
debug 2019-06-11 08:55:19.022 7faf8f032700 0 mon.a@0(leader).osd e1 crush map has features 288514050185494528, adjusting msgr requires
cluster 2019-06-11 08:55:19.018982 mon.a (mon.0) 0 : [INF] mkfs fcda2921-e04c-4d67-95e5-57cfc0ced914
debug 2019-06-11 08:55:19.022 7faf8f032700 1 mon.a@0(leader).paxosservice(auth 1..1) refresh upgraded, format 0 -> 3
debug 2019-06-11 08:55:19.026 7faf8f032700 0 log_channel(cluster) log [DBG] : osdmap e1: 0 total, 0 up, 0 in
debug 2019-06-11 08:55:19.026 7faf8f032700 0 log_channel(cluster) log [DBG] : mgrmap e1: no daemons active
cluster 2019-06-11 08:55:19.003539 mon.a (mon.0) 1 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
cluster 2019-06-11 08:55:19.016667 mon.a (mon.0) 2 : cluster [INF] mon.a is new leader, mons a in quorum (ranks 0)
cluster 2019-06-11 08:55:19.017381 mon.a (mon.0) 3 : cluster [DBG] monmap e1: 1 mons at {a=[v2:10.233.31.119:3300/0,v1:10.233.31.119:6789/0]}
cluster 2019-06-11 08:55:19.022683 mon.a (mon.0) 4 : cluster [DBG] fsmap
cluster 2019-06-11 08:55:19.030173 mon.a (mon.0) 5 : cluster [DBG] osdmap e1: 0 total, 0 up, 0 in
cluster 2019-06-11 08:55:19.031238 mon.a (mon.0) 6 : cluster [DBG] mgrmap e1: no daemons active
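
Note: the pod-name hash above changes on every redeploy. A minimal label-based sketch for fetching the same mon logs without hard-coding the pod name, assuming the rook-ceph-system namespace from this session and the app=rook-ceph-mon label Rook applies to mon pods (exact labels can vary between Rook versions):

# label-selector equivalent of the kubectl logs call above (labels assumed)
$ kubectl -n rook-ceph-system logs -l app=rook-ceph-mon --tail=100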
$ kubectl exec -it rook-ceph-mon-a-74cc6db5c8-8s5l5 ceph daemon mon.a mon_status
{
    "name": "a",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "quorum_age": 2631,
    "features": {
        "required_con": "2449958747315912708",
        "required_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ],
        "quorum_con": "4611087854031667199",
        "quorum_mon": [
            "kraken",
            "luminous",
            "mimic",
            "osdmap-prune",
            "nautilus"
        ]
    },
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "fcda2921-e04c-4d67-95e5-57cfc0ced914",
        "modified": "2019-06-11 08:55:18.183575",
        "created": "2019-06-11 08:55:18.183575",
        "min_mon_release": 14,
        "min_mon_release_name": "nautilus",
        "features": {
            "persistent": [
                "kraken",
                "luminous",
                "mimic",
                "osdmap-prune",
                "nautilus"
            ],
            "optional": []
        },
        "mons": [
            {
                "rank": 0,
                "name": "a",
                "public_addrs": {
                    "addrvec": [
                        {
                            "type": "v2",
                            "addr": "10.233.31.119:3300",
                            "nonce": 0
                        },
                        {
                            "type": "v1",
                            "addr": "10.233.31.119:6789",
                            "nonce": 0
                        }
                    ]
                },
                "addr": "10.233.31.119:6789/0",
                "public_addr": "10.233.31.119:6789/0"
            }
        ]
    },
    "feature_map": {
        "mon": [
            {
                "features": "0x3ffddff8ffacffff",
                "release": "luminous",
                "num": 1
            }
        ],
        "client": [
            {
                "features": "0x3ffddff8ffacffff",
                "release": "luminous",
                "num": 1
            }
        ]
    }
}
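
For a quick quorum check, the relevant fields of that JSON can be pulled out with jq. A sketch, assuming jq is installed and using the pod name from this session; for the output above it prints state "leader", rank 0 and quorum [0]:

$ kubectl exec rook-ceph-mon-a-74cc6db5c8-8s5l5 -- ceph daemon mon.a mon_status | jq '{state, rank, quorum, quorum_age}'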
$ kubectl logs -f rook-ceph-operator-7cd5d8bd4c-pclxp
2019-06-11 08:55:01.746020 I | rookcmd: starting Rook v1.0.2 with arguments '/usr/local/bin/rook ceph operator'
2019-06-11 08:55:01.746126 I | rookcmd: flag values: --alsologtostderr=false, --csi-attacher-image=quay.io/k8scsi/csi-attacher:v1.0.1, --csi-cephfs-image=quay.io/cephcsi/cephfsplugin:v1.0.0, --csi-cephfs-plugin-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin.yaml, --csi-cephfs-provisioner-template-path=/etc/ceph-csi/cephfs/csi-cephfsplugin-provisioner.yaml, --csi-enable-cephfs=false, --csi-enable-rbd=false, --csi-provisioner-image=quay.io/k8scsi/csi-provisioner:v1.0.1, --csi-rbd-image=quay.io/cephcsi/rbdplugin:v1.0.0, --csi-rbd-plugin-template-path=/etc/ceph-csi/rbd/csi-rbdplugin.yaml, --csi-rbd-provisioner-template-path=/etc/ceph-csi/rbd/csi-rbdplugin-provisioner.yaml, --csi-registrar-image=quay.io/k8scsi/csi-node-driver-registrar:v1.0.2, --csi-snapshotter-image=quay.io/k8scsi/csi-snapshotter:v1.0.1, --help=false, --log-flush-frequency=5s, --log-level=INFO, --log_backtrace_at=:0, --log_dir=, --log_file=, --logtostderr=true, --mon-healthcheck-interval=45s, --mon-out-timeout=10m0s, --skip_headers=false, --stderrthreshold=2, --v=0, --vmodule=
2019-06-11 08:55:01.747481 I | cephcmd: starting operator
2019-06-11 08:55:01.810858 I | op-agent: getting flexvolume dir path from FLEXVOLUME_DIR_PATH env var
2019-06-11 08:55:01.810959 I | op-agent: discovered flexvolume dir path from source env var. value: /var/lib/kubelet/volume-plugins
2019-06-11 08:55:01.810989 I | op-agent: no agent mount security mode given, defaulting to 'Any' mode
2019-06-11 08:55:01.826286 I | op-agent: rook-ceph-agent daemonset started
2019-06-11 08:55:01.840635 I | op-discover: rook-discover daemonset started
2019-06-11 08:55:01.850846 I | operator: rook-provisioner rook.io/block started using rook.io flex vendor dir
2019-06-11 08:55:01.851278 I | operator: rook-provisioner ceph.rook.io/block started using ceph.rook.io flex vendor dir
2019-06-11 08:55:01.851330 I | operator: Watching the current namespace for a cluster CRD
2019-06-11 08:55:01.851345 I | op-cluster: start watching clusters in all namespaces
2019-06-11 08:55:01.851383 I | op-cluster: Enabling hotplug orchestration: ROOK_DISABLE_DEVICE_HOTPLUG=false
I0611 08:55:01.852397 6 leaderelection.go:217] attempting to acquire leader lease rook-ceph-system/rook.io-block...
I0611 08:55:01.852432 6 leaderelection.go:217] attempting to acquire leader lease rook-ceph-system/ceph.rook.io-block...
2019-06-11 08:55:01.894150 I | op-cluster: starting cluster in namespace rook-ceph-system
2019-06-11 08:55:01.898603 I | op-cluster: skipping watching for legacy rook cluster events (legacy cluster CRD probably doesn't exist): the server could not find the requested resource (get clusters.ceph.rook.io)
2019-06-11 08:55:01.902496 I | op-cluster: Cluster %s is not ready. Skipping orchestration.rook-ceph-system
2019-06-11 08:55:01.902516 I | op-cluster: Cluster %s is not ready. Skipping orchestration.rook-ceph-system
2019-06-11 08:55:01.902520 I | op-cluster: Cluster %s is not ready. Skipping orchestration.rook-ceph-system
2019-06-11 08:55:07.904319 I | op-k8sutil: verified the ownerref can be set on resources
2019-06-11 08:55:07.922247 I | op-k8sutil: waiting for job rook-ceph-detect-version to complete...
2019-06-11 08:55:12.982368 I | op-cluster: Detected ceph image version: 14.2.1 nautilus
2019-06-11 08:55:12.982397 I | op-cluster: CephCluster rook-ceph-system status: Creating
2019-06-11 08:55:13.000465 I | op-mon: start running mons
2019-06-11 08:55:13.015962 I | op-mon: saved mon endpoints to config map map[data: maxMonId:-1 mapping:{"node":{},"port":{}}]
2019-06-11 08:55:13.138530 I | cephconfig: writing config file /var/lib/rook/rook-ceph-system/rook-ceph-system.config
2019-06-11 08:55:13.138784 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-06-11 08:55:13.139009 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-system
2019-06-11 08:55:14.135501 I | op-mon: targeting the min mon count 3 since there are only 3 available nodes
2019-06-11 08:55:14.537190 I | op-mon: Found 3 running nodes without mons
2019-06-11 08:55:14.537232 I | op-mon: creating mon a
2019-06-11 08:55:14.745147 I | op-mon: mon a endpoint are [v2:10.233.31.119:3300,v1:10.233.31.119:6789]
2019-06-11 08:55:15.135343 I | op-mon: saved mon endpoints to config map map[data:a=10.233.31.119:6789 maxMonId:2 mapping:{"node":{"a":{"Name":"node1","Hostname":"node1","Address":"10.0.1.1"},"b":{"Name":"node2","Hostname":"node2","Address":"10.0.1.2"},"c":{"Name":"node3","Hostname":"node3","Address":"10.0.1.3"}},"port":{}}]
2019-06-11 08:55:16.337593 I | cephconfig: writing config file /var/lib/rook/rook-ceph-system/rook-ceph-system.config
2019-06-11 08:55:16.338014 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-06-11 08:55:16.338306 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-system
2019-06-11 08:55:16.338851 I | cephconfig: writing config file /var/lib/rook/rook-ceph-system/rook-ceph-system.config
2019-06-11 08:55:16.339154 I | cephconfig: copying config to /etc/ceph/ceph.conf
2019-06-11 08:55:16.339665 I | cephconfig: generated admin config in /var/lib/rook/rook-ceph-system
2019-06-11 08:55:16.361456 I | op-mon: mons created: 1
2019-06-11 08:55:16.361488 I | op-mon: waiting for mon quorum with [a]
2019-06-11 08:55:16.538946 I | op-mon: mon a is not yet running
2019-06-11 08:55:16.539131 I | op-mon: mons running: []
I0611 08:55:19.425405 6 leaderelection.go:227] successfully acquired lease rook-ceph-system/ceph.rook.io-block
I0611 08:55:19.426331 6 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-system", Name:"ceph.rook.io-block", UID:"3744753b-8c1c-11e9-a696-4a36a9b78c69", APIVersion:"v1", ResourceVersion:"30479", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-7cd5d8bd4c-pclxp_99347830-8c26-11e9-8975-da80af105a45 became leader
I0611 08:55:19.426397 6 controller.go:769] Starting provisioner controller ceph.rook.io/block_rook-ceph-operator-7cd5d8bd4c-pclxp_99347830-8c26-11e9-8975-da80af105a45!
I0611 08:55:19.526830 6 controller.go:818] Started provisioner controller ceph.rook.io/block_rook-ceph-operator-7cd5d8bd4c-pclxp_99347830-8c26-11e9-8975-da80af105a45!
I0611 08:55:19.643286 6 leaderelection.go:227] successfully acquired lease rook-ceph-system/rook.io-block
I0611 08:55:19.643665 6 controller.go:769] Starting provisioner controller rook.io/block_rook-ceph-operator-7cd5d8bd4c-pclxp_99345280-8c26-11e9-8975-da80af105a45!
I0611 08:55:19.644645 6 event.go:209] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"rook-ceph-system", Name:"rook.io-block", UID:"37485d39-8c1c-11e9-a696-4a36a9b78c69", APIVersion:"v1", ResourceVersion:"30485", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' rook-ceph-operator-7cd5d8bd4c-pclxp_99345280-8c26-11e9-8975-da80af105a45 became leader
I0611 08:55:20.043960 6 controller.go:818] Started provisioner controller rook.io/block_rook-ceph-operator-7cd5d8bd4c-pclxp_99345280-8c26-11e9-8975-da80af105a45!
2019-06-11 08:55:21.547501 I | op-mon: mons running: [a]
2019-06-11 08:55:21.547969 I | exec: Running command: ceph mon_status --connect-timeout=15 --cluster=rook-ceph-system --conf=/var/lib/rook/rook-ceph-system/rook-ceph-system.config --keyring=/var/lib/rook/rook-ceph-system/client.admin.keyring --format json --out-file /tmp/315681822
W0611 09:03:57.911306 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 30368 (31791)
2019-06-11 09:03:58.914906 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:03:58.915220 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:03:58.915417 I | op-cluster: device lists are equal. skipping orchestration
W0611 09:09:42.920002 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 32323 (33026)
2019-06-11 09:09:43.923483 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:09:43.923800 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:09:43.924012 I | op-cluster: device lists are equal. skipping orchestration
W0611 09:16:15.927884 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 33546 (34460)
2019-06-11 09:16:16.932008 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:16:16.932112 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:16:16.932192 I | op-cluster: device lists are equal. skipping orchestration
W0611 09:22:22.936151 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 35005 (35813)
2019-06-11 09:22:23.939110 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:22:23.939251 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:22:23.939355 I | op-cluster: device lists are equal. skipping orchestration
W0611 09:30:42.942891 6 reflector.go:289] github.com/rook/rook/pkg/operator/ceph/cluster/controller.go:165: watch of *v1.ConfigMap ended with: too old resource version: 36318 (37602)
2019-06-11 09:30:43.946137 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:30:43.946484 I | op-cluster: device lists are equal. skipping orchestration
2019-06-11 09:30:43.946631 I | op-cluster: device lists are equal. skipping orchestration
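
After "waiting for mon quorum with [a]" the operator settles into periodic device-list checks, so the tail of this log is steady state. A sketch for confirming the state it converged on, assuming the CephCluster CRD installed by Rook v1.0.2 and the rook-ceph-system namespace used throughout this log:

$ kubectl -n rook-ceph-system get cephcluster
$ kubectl -n rook-ceph-system get pods -l app=rook-ceph-mon -o wide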