- add "flush" REST API call to fix the issue of lazy-commit. use
POST /<bucket name>/?logging
as the command - add admin command to get bucket logging info:
radosgw-admin bucket logging get
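A minimal client-side sketch of the proposed flush call, assuming the `POST /<bucket name>/?logging` form above; endpoint, credentials, bucket name and region string are placeholders, not current RGW behavior:

```python
# Sketch only: issue the proposed "flush" call (POST /<bucket>/?logging) against
# RGW with SigV4 signing. Endpoint, credentials and bucket name are placeholders.
import requests
from botocore.auth import S3SigV4Auth
from botocore.awsrequest import AWSRequest
from botocore.credentials import Credentials

endpoint = "http://localhost:8000"   # RGW endpoint (assumption)
bucket = "source-bucket"             # bucket that has logging enabled (assumption)
creds = Credentials("ACCESS_KEY", "SECRET_KEY")

url = f"{endpoint}/{bucket}/?logging"
req = AWSRequest(method="POST", url=url, data=b"")
S3SigV4Auth(creds, "s3", "default").add_auth(req)  # region/zonegroup name is a placeholder

resp = requests.post(url, headers=dict(req.headers), data=b"")
print(resp.status_code)
```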
- handle copy correctly (see the copy sketch below):
  - in "Journal" mode, we should just see the "PUT" of the new object (existing behavior)
  - in "Standard" mode, we should (one of the 2):
    - document our difference from the spec: instead of `REST.COPY.OBJECT_GET` + `REST.PUT.OBJECT` we would have a single `REST.COPY.OBJECT_PUT` with details on the target object
    - add `REST.COPY.OBJECT_GET` artificially to indicate the source object
- document our difference from the spec, instead of
- object size parameter is
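For reference, a minimal copy call that the two "Standard" mode options above would have to log; bucket/key names and the endpoint are placeholders, and the record names in the comments are the proposals from the list, not current behavior:

```python
# Placeholder bucket/key names; endpoint is an assumption.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000")

# Option 1 would log a single (non-standard) REST.COPY.OBJECT_PUT record for the
# target object; option 2 would also emit an artificial REST.COPY.OBJECT_GET
# record for the source object.
s3.copy_object(
    Bucket="target-bucket",
    Key="copy-of-obj",
    CopySource={"Bucket": "source-bucket", "Key": "obj"},
)
```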
- completion records ("Journal2"?)
- filter by tags/attributes
- add test: versioning, policy
- cross tenant/account logging
- add "region" (zone group) to "partitioned" key format
- test log bucket lifecycle configuration
- add bucket logging s3tests to teuthology
- currently we assume the owner of the committed object is the logging bucket owner.
  to comply with AWS log delivery permissions we should support granting write permissions to a different user, which will be considered as the one that writes the logs.
  this will also require adding a `TargetGrants` section to the `PutBucketLogging` REST API (see the sketch below)
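A sketch of the client side of that, using the standard `PutBucketLogging` shape with `TargetGrants`; bucket names and the grantee ID are placeholders, and whether RGW accepts and honors this is exactly the TODO above:

```python
# Placeholder buckets and grantee; the log-delivery user model is still a TODO.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000")
s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "log-bucket",
            "TargetPrefix": "access-logs/",
            # grant write permission to the user that actually delivers the logs
            "TargetGrants": [
                {
                    "Grantee": {"Type": "CanonicalUser", "ID": "log-writer-user-id"},
                    "Permission": "WRITE",
                }
            ],
        }
    },
)
```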
- get the "resource" (= what comes after the "?") part from the request URI.
  currently our `op_name` is used, but correct strings could be taken from here: ceph/ceph#17754. implemented: https://github.com/ceph/ceph/pull/59808/commits/92fe61db4583a6fa76945f37e6d2e51cf8c62cfa
- implement other "TODOs" of the "standard" log format:
  - referer
  - user agent
  - Signature Version (SigV2 or SigV4)
  - SSL cipher. e.g. "ECDHE-RSA-AES128-GCM-SHA256"
  - Auth type. e.g. "AuthHeader"
  - TLS version. e.g. "TLSv1.2" or "TLSv1.3"
- lifecycle support - create logs for lifecycle events
- add owner "display name"
- add compatibility doc
- support server side encryption for log bucket
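Related sketch: enabling SSE-S3 default encryption on the log bucket, which is the only default-encryption mode AWS allows for a log destination (see the AWS notes at the end); bucket name and endpoint are placeholders:

```python
# Placeholder bucket name; SSE-S3 (AES256) is the only default encryption AWS
# allows on a log destination bucket (SSE-KMS is not supported there).
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000")
s3.put_bucket_encryption(
    Bucket="log-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```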
- when going over the max RADOS object size, start another tail object and commit according to the regular logic. this is more complex to implement (similar to multipart upload), however, the number of log objects is going to be accurate according to the logging configuration
- "batch mode" - or any other mode where we do not guarantee that every operations result with a command
- decide how/if to implement `EventTime` in date source (see the sketch below)
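If "date source" here refers to the date source of the partitioned target key format, the AWS-side request looks like the following; bucket names and endpoint are placeholders, and RGW support for `PartitionDateSource` is the open question:

```python
# Placeholder buckets; whether RGW honors PartitionDateSource=EventTime is the TODO.
import boto3

s3 = boto3.client("s3", endpoint_url="http://localhost:8000")
s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "log-bucket",
            "TargetPrefix": "access-logs/",
            # partition log object keys by event time rather than delivery time
            "TargetObjectKeyFormat": {
                "PartitionedPrefix": {"PartitionDateSource": "EventTime"}
            },
        }
    },
)
```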
- according to AWS doc a COPY operation should result in 2 records: GET + PUT. in our case we create HEAD+PUT records
- implement custom access log information
  - add another record type: "Custom" that allows the user to define which fields to record, based on a set of possible fields (see the hypothetical sketch below)
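Purely hypothetical illustration of such a "Custom" record type; none of these parameter or field names exist in RGW or the S3 API today, they only show the idea of user-selected fields:

```python
# Hypothetical only: a possible shape for a "Custom" logging record type where
# the user picks which fields to record. "LoggingType" and "RecordFields" are
# invented names for illustration, not existing API parameters.
custom_logging_conf = {
    "LoggingEnabled": {
        "TargetBucket": "log-bucket",
        "TargetPrefix": "access-logs/",
        "LoggingType": "Custom",        # hypothetical extension
        "RecordFields": [               # hypothetical field selection
            "BucketOwner",
            "Bucket",
            "Time",
            "Operation",
            "Key",
            "HTTPStatus",
            "ObjectSize",
        ],
    }
}
```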
- bucket notifications on bucket logging
notes from AWS doc:
- "Your destination bucket should not have server access logging enabled. You can have logs delivered to any bucket that you own that is in the same Region as the source bucket, including the source bucket itself. However, delivering logs to the source bucket will cause an infinite loop of logs and is not recommended. For simpler log management, we recommend that you save access logs in a different bucket."
  - it should not be allowed to set bucket logging configuration on a log bucket, or to define a log bucket that has bucket logging configuration on it. this also covers not allowing delivery of logs to the source bucket (see the test sketch after these notes).
- "S3 buckets that have S3 Object Lock enabled can't be used as destination buckets for server access logs. Your destination bucket must not have a default retention period configuration."
- "The destination bucket must not have Requester Pays enabled."
  - N/A
- "You can use default bucket encryption on the destination bucket only if you use server-side encryption with Amazon S3 managed keys (SSE-S3), which uses the 256-bit Advanced Encryption Standard (AES-256). Default server-side encryption with AWS Key Management Service (AWS KMS) keys (SSE-KMS) is not supported."
  - TODO
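A rough test sketch for the "no logging on a log bucket / no logging loop" rule above; bucket names, the endpoint, and the expectation that RGW rejects the second call (error code left unspecified) are all assumptions:

```python
# Sketch of the expected rejection; bucket names and endpoint are placeholders,
# and the exact error code RGW should return is deliberately left unspecified.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3", endpoint_url="http://localhost:8000")

# "source-bucket" delivers its access logs to "log-bucket"
s3.put_bucket_logging(
    Bucket="source-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {"TargetBucket": "log-bucket", "TargetPrefix": "access-logs/"}
    },
)

# configuring logging on the log bucket itself is expected to be rejected,
# to avoid log delivery loops
try:
    s3.put_bucket_logging(
        Bucket="log-bucket",
        BucketLoggingStatus={
            "LoggingEnabled": {"TargetBucket": "another-bucket", "TargetPrefix": "p/"}
        },
    )
except ClientError as e:
    print("rejected as expected:", e.response["Error"]["Code"])
```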