Add the following to the [client] section of ceph.conf:
admin_socket = /var/run/ceph/guests/$cluster-$type.$id.$pid.$cctid.asok
log_file = /var/log/qemu/qemu-guest-$pid.log

Create the log directory and the unix socket directory:
mkdir -p /var/run/ceph/guests/ /var/log/qemu/
Change the ownership of those directories:
chown qemu:qemu /var/log/qemu/ /var/run/ceph/guests
Restart the VM with virsh:
[root@nova10 ceph]# virsh shutdown instance-000005ea
Domain instance-000005ea is being shutdown

[root@nova10 ceph]# virsh start instance-000005ea
Domain instance-000005ea started
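As a quick sanity check after the restart (not strictly necessary), you can list the two directories and confirm that a qemu-guest-<pid>.log file and an .asok socket have appeared:

[root@nova10 ~]# ls /var/log/qemu/
[root@nova10 ~]# ls /var/run/ceph/guests/

If they have not, check the guest log for errors; the problems described below are what came up here.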
- Problem:
A qemu-guest-111572.log file is generated under /var/log/qemu/ with the following error:
2018-01-22 11:18:55.737790 7f8b67509d00 -1 auth: unable to find a keyring on /etc/ceph/ceph.client.openstack.keyring,/etc/ceph/ceph.keyring,/etc/ceph/keyring,/etc/ceph/keyring.bin: (2) No such file or directory
- Auth is already disabled on this cluster, yet a keyring is still required here. I didn't look into why; I simply added a ceph.client.openstack.keyring file:
First check the auth keyring of the openstack user with ceph auth list:
client.openstack
    key: AQD4j4FZnChLGRAA1ElxLLZ45HfAQhC0QhKPVw==
    caps: [mon] allow r
    caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=openstack-pool-9ecb83e9-fe6c-4519-aeed-6d7646b05aae
- On the compute node, create a ceph.client.openstack.keyring file under /etc/ceph/ and copy the openstack auth key obtained above into it.
Note, however, that the format used in the keyring file is not quite what ceph auth list shows; make the following changes:
[client.openstack]                                  ----- add the square brackets
    key = AQD4j4FZnChLGRAA1ElxLLZ45HfAQhC0QhKPVw==  ----- change the ':' after key to '='
    caps mon = "allow r"                            ----- drop the ':' after caps, remove the brackets around mon, add '=' after mon, and put the permission 'allow r' in double quotes; the other caps lines below are handled the same way
    caps osd = "allow class-read object_prefix rbd_children, allow rwx pool=openstack-pool-9ecb83e9-fe6c-4519-aeed-6d7646b05aae"
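Alternatively, instead of hand-editing the format, ceph auth get should print the entry already in keyring format and can write it straight to a file (a sketch; I did not use it here, but the subcommand is standard):

ceph auth get client.openstack -o /etc/ceph/ceph.client.openstack.keyring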
The qemu-guest-111572.log file under /var/log/qemu/ also reports the following error:
2018-01-22 11:18:55.737481 7f8b67509d00 -1 asok(0x7f8b6aba7ac0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.openstack.111572.140236769443840.asok': (13) Permission denied
- This is a permissions problem on the /var/run/ceph directory: the VM instances started by qemu run with owner and group qemu, but /var/run/ceph is owned by ceph:ceph with mode 770.
My fix here was simply to change the permissions on /var/run/ceph to 777. (It is also worth setting /var/log/qemu/ to 777, since I am not sure which owner and group the qemu process runs as.)
[root@nova10 run]# ll | grep ceph
drwxrwx---. 3 ceph ceph 60 Jan 22 11:18 ceph

[root@nova10 run]# chmod 777 ceph -R

[root@nova10 run]# ll | grep ceph
drwxrwxrwx. 3 ceph ceph 60 Jan 22 11:18 ceph
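A less drastic alternative (untested here) would be to add the qemu user to the ceph group, so that it can traverse the 770 directory without opening it up to everyone; the guest has to be restarted afterwards for the new group membership to take effect:

usermod -a -G ceph qemu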
- Inspecting rbd through the admin socket:
Note: each qemu process may use several rbd images, and it always uses at least one, since the guest system disk now also lives on an rbd image. The string we see after the process id is the cookie.
[root@storage04 ~]# rbd status 03a1f3cc-6296-4953-b5bb-38932f2e1cf6_disk -p vms
Watchers:
    watcher=192.168.34.106:0/2681227209 client.7966888 cookie=140061172315264

[root@nova10 guests]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok help
{
    "config diff": "dump diff of current config and default config",
    "config get": "config get <field>: get the config value",
    "config set": "config set <field> <val> [<val> ...]: set a config variable",
    "config show": "dump current config settings",
    "get_command_descriptions": "list available commands",
    "git_version": "get git sha1",
    "help": "list available commands",
    "log dump": "dump recent log entries to log file",
    "log flush": "flush log entries to log file",
    "log reopen": "reopen log file",
    "objecter_requests": "show in-progress osd requests",
    "perf dump": "dump perfcounters value",
    "perf reset": "perf reset <name>: perf reset all or one perfcounter name",
    "perf schema": "dump perfcounters schema",
    "rbd cache flush volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e": "flush rbd image volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e cache",
    "rbd cache invalidate volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e": "invalidate rbd image volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e cache",
    "version": "get ceph version"
}
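With the socket available, the per-image commands listed in the help output can be called directly; for example, to flush the rbd cache of the volume shown above:

[root@nova10 guests]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok rbd cache flush volumes/volume-3e76f65d-4e9b-48f4-ac94-b27b5fa4b21e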
- How do you adjust the rbd log level?
Set the debug_rbd log level through the admin socket created by each process that uses an rbd image.
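It should also be possible to make the level persistent by putting it in the [client] section of ceph.conf and restarting the guest (I have not verified that path here); the admin-socket approach below changes it at runtime without a restart:

debug_rbd = 20/20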
Check the current debug_rbd value:
[root@nova10 ceph]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config get debug_rbd
{
    "debug_rbd": "0/0"
}
Raise the debug_rbd log level:
[root@nova10 qemu]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config set debug_rbd 20/20
{
    "success": ""
}
Watch the log:
[root@nova10 qemu]# tailf qemu-guest-119985.log
2018-01-22 16:09:31.532995 7f3af3c93d00 20 librbd::AioImageRequestWQ: aio_write: ictx=0x7f3af584f200, completion=0x7f3af56a3740, off=2147647488, len=1048576, flags=0
2018-01-22 16:09:31.533024 7f3af3c93d00 20 librbd::AioImageRequestWQ: queue: ictx=0x7f3af584f200, req=0x7f3afeb84100
2018-01-22 16:09:31.533028 7f3af3c93d00 20 librbd::ExclusiveLock: 0x7f3af5976140 is_lock_owner=1
2018-01-22 16:09:31.533059 7f3a3fe60700 20 librbd::AsyncOperation: 0x7f3af56a3878 start_op
2018-01-22 16:09:31.533073 7f3a3fe60700 20 librbd::AioImageRequestWQ: process: ictx=0x7f3af584f200, req=0x7f3afeb84100
2018-01-22 16:09:31.533077 7f3a3fe60700 20 librbd::AioImageRequest: aio_write: ictx=0x7f3af584f200, completion=0x7f3af56a3740
2018-01-22 16:09:31.533092 7f3a3fe60700 20 librbd::AioCompletion: 0x7f3af56a3740 set_request_count: pending=1
2018-01-22 16:09:31.534269 7f3a3f25b700 20 librbd::AioCompletion: 0x7f3af56a3740 complete_request: cb=1, pending=0
2018-01-22 16:09:31.534284 7f3a3f25b700 20 librbd::AioCompletion: 0x7f3af56a3740 finalize: r=0, read_buf=0, real_bl=0
2018-01-22 16:09:31.534301 7f3a3f25b700 20 librbd::AsyncOperation: 0x7f3af56a3878 finish_op
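Once debugging is done, remember to turn the level back down the same way, otherwise the guest log grows very quickly at level 20:

[root@nova10 qemu]# ceph daemon /var/run/ceph/guests/ceph-client.openstack.119985.139891203856896.asok config set debug_rbd 0/0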