- start a vstart cluster
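for example (a typical developer invocation, assuming a built source tree and run from the build directory; the daemon counts are just an example, and vstart's RGW listens on port 8000 by default):
MON=1 OSD=1 MDS=0 MGR=1 RGW=1 ../src/vstart.sh -n -d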
- create a tenanted user:
bin/radosgw-admin user create --display-name "Ka Boom" --tenant boom --uid ka --access_key ka --secret_key boom
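- optionally, verify the tenanted user was created:
bin/radosgw-admin user info --tenant boom --uid ka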
- create a bucket under that tenant:
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3 mb s3://fish
- create a log bucket with no tenant:
aws --endpoint-url http://127.0.0.1:8000 s3 mb s3://all-logs
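- optionally, list buckets to confirm both were created (fish under the boom tenant, all-logs under the default non-tenanted credentials):
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3 ls
aws --endpoint-url http://127.0.0.1:8000 s3 ls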
- create a matching log bucket policy file (policy.json):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "S3ServerAccessLogsPolicy",
      "Effect": "Allow",
      "Principal": {
        "Service": "logging.s3.amazonaws.com"
      },
      "Action": [
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::all-logs/logs",
      "Condition": {
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3::boom:fish"
        },
        "StringEquals": {
          "aws:SourceAccount": "boom$ka"
        }
      }
    }
  ]
}
- add the policy to the log bucket:
aws --endpoint-url http://127.0.0.1:8000 s3api put-bucket-policy --bucket all-logs --policy file://policy.json
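- optionally, read the policy back to confirm it was attached:
aws --endpoint-url http://127.0.0.1:8000 s3api get-bucket-policy --bucket all-logs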
- create a bucket logging rule on the source bucket:
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3api put-bucket-logging --bucket fish --bucket-logging-status '{"LoggingEnabled": {"TargetBucket": ":all-logs", "TargetPrefix": "logs", "LoggingType": "Journal"}}'
note that the log bucket has to be explicitly prefixed with an empty tenant, i.e. ":all-logs"
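the configuration should be readable back with get-bucket-logging, and uploading any local object (policy.json from above is reused here just as sample data) gives the journal something to record before flushing:
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3api get-bucket-logging --bucket fish
AWS_ACCESS_KEY_ID=ka AWS_SECRET_ACCESS_KEY=boom aws --endpoint-url http://localhost:8000 s3 cp policy.json s3://fish/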
- verify that flushing works from radosgw-admin (in both ways):
bin/radosgw-admin bucket logging flush --bucket boom/fish
bin/radosgw-admin bucket logging flush --bucket fish --tenant boom --uid ka
Thanks for the helpful guide, @yuvalif
It simplifies the bucket logging and flush process.
Would it be useful to add a quick aws s3 ls verification step after the radosgw-admin bucket logging flush command to confirm logs are written to the all-logs bucket? It could help users validate the flush outcome.
Thanks!
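for reference, a minimal version of that check (assuming the default non-tenanted credentials that created all-logs, and that something was uploaded to fish before the flush):
aws --endpoint-url http://127.0.0.1:8000 s3 ls s3://all-logs --recursive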