@johnspurlock-skymethod
Last active March 13, 2024 17:32

Unofficial R2 notes

Cost

https://developers.cloudflare.com/r2/platform/pricing/

  • reads: up to 10,000,000 operations free each month ($3.60 free), then $0.36 per 1,000,000 operations
  • writes: up to 1,000,000 operations free each month ($4.50 free), then $4.50 per 1,000,000 operations
  • storage: up to 10 GB-months free each month ($0.15 free), then $0.015 per GB-month, i.e. $15.36 per TB-month
    • storage is currently "sampled at 10 min but might tune that down to something like 15min or 20min"

Limits

https://developers.cloudflare.com/r2/platform/limits/

  • max 1000 buckets per account
  • max just under 5 TB per object
  • max just under 5 GB per single upload (one PutObject call or one multipart part)

unofficial:

  • multipart upload parts must all be the same size except the last part! includes copy/copy-range parts
  • max 2 or 3 simultaneous multipart part uploads per upload-id
  • max 200 simultaneous uploads per bucket

Worker binding interface

Runtime interface and associated types:
https://github.com/cloudflare/workers-types/blob/master/index.d.ts#L1000

Need to specify a compatibility_date of 2022-04-18 during script upload to enable the latest runtime bindings.
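
For reference, a minimal wrangler.toml sketch for such a binding (worker name, binding name, and bucket name are placeholders, not from these notes):

name = "my-worker"                 # placeholder worker name
main = "src/index.ts"
compatibility_date = "2022-04-18"  # enables the latest R2 runtime bindings

[[r2_buckets]]
binding = "MY_BUCKET"              # exposed as env.MY_BUCKET inside the worker
bucket_name = "my-bucket"          # placeholder bucket name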

.list

  • bug: no options.startAfter equiv to s3 api
  • bug: any cache-control value will appear as httpMetadata.contentDisposition, not httpMetadata.cacheControl!
  • bug: any content-encoding value will appear as httpMetadata.contentDisposition, not httpMetadata.contentEncoding!
  • bug: this also affects writeHttpMetadata in the same way
  • bug: passing in a limit smaller than the number of delimitedPrefixes that would be returned fails with We encountered an internal error. Please try again
  • bug: when requesting with a limit and prefix, if there are more objects than the limit, but fewer delimitedPrefixes, truncated will be incorrectly set to false, and no cursor returned, preventing further listing! (see the paging sketch below)
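
A minimal paging sketch against the binding (TypeScript; MY_BUCKET is an assumed binding name, and the prefix/limit values are arbitrary):

// list all keys under a prefix by following the cursor - MY_BUCKET is an assumed binding name
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const keys: string[] = [];
    let cursor: string | undefined;
    do {
      const page = await env.MY_BUCKET.list({ prefix: 'logs/', limit: 500, cursor });
      for (const obj of page.objects) keys.push(obj.key);
      // beware the truncation bug above: with a prefix + limit, truncated can be false too early
      cursor = page.truncated ? page.cursor : undefined;
    } while (cursor);
    return new Response(keys.join('\n'));
  },
};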

.get/head

  • object etags are payload md5
  • version is a 128-bit uuid (32 hex chars) whose uuid version field is undefined
  • bug: if-none-match, if passed into .get via a Headers object as the onlyIf value, requires an etag value stripped of double quotes (against the spec). After stripping quotes, it returns an object with no body property to indicate a 304 should be returned
  • bug: if-match, if passed into .get via a Headers object as the onlyIf value, requires an etag value stripped of double quotes (against the spec). After stripping quotes, it returns an object with no body property to indicate a 412 should be returned (a conditional-read sketch follows this list)
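
A conditional-read sketch that applies the quote-stripping workaround (TypeScript; MY_BUCKET is an assumed binding name):

// conditional GET sketch - MY_BUCKET is an assumed R2Bucket binding name
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);

    // workaround for the bug above: strip double quotes from the incoming etag (against the spec)
    const ifNoneMatch = request.headers.get('if-none-match')?.replace(/"/g, '');
    const onlyIf = new Headers();
    if (ifNoneMatch) onlyIf.set('if-none-match', ifNoneMatch);

    const obj = await env.MY_BUCKET.get(key, { onlyIf });
    if (!obj) return new Response('not found', { status: 404 });

    // an object with no body property means the precondition matched: return 304
    if (!('body' in obj)) return new Response(null, { status: 304 });

    const headers = new Headers();
    obj.writeHttpMetadata(headers); // beware the cacheControl/contentEncoding mapping bug noted under .list
    headers.set('etag', obj.httpEtag);
    return new Response(obj.body, { headers });
  },
};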

.put

  • md5 option, if provided as a string, must be hex (not b64 like the content-md5 header) - see the sketch below
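
A put sketch showing the hex md5 option (TypeScript; MY_BUCKET and the object key are assumed names; 'MD5' support in crypto.subtle.digest is a non-standard Workers extension, so treat that as an assumption to verify):

// compute a hex md5 and pass it via the put option - MY_BUCKET is an assumed binding name
function hex(buf: ArrayBuffer): string {
  return Array.from(new Uint8Array(buf)).map(b => b.toString(16).padStart(2, '0')).join('');
}

export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const body = await request.arrayBuffer();
    // 'MD5' here relies on the non-standard Workers digest extension
    const md5 = hex(await crypto.subtle.digest('MD5', body));
    const obj = await env.MY_BUCKET.put('example.txt', body, { md5 }); // hex string, not base64
    return new Response(obj.etag);
  },
};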

.delete

  • takes a single key only, no analog to DeleteObjects to delete multiple objects in a single call "plumbing is done, just not exposed in the runtime"

S3 compatible API

Official docs:
https://developers.cloudflare.com/r2/platform/s3-compatibility/api/

V4 Signing:

  • endpoint: <account-id>.r2.cloudflarestorage.com
  • access_key_id: cloudflare api token id (from dev tools when editing or listing)
  • secret_key: hex(sha256(utf8(<api-token-secret-value>))) i.e. echo -n "<api-token-secret-value>" | sha256sum
    • or "Manage R2 API Tokens" in the R2 dashboard
  • region: auto (or an empty string or us-east-1, which alias to auto); any other value will fail
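
For a quick test of the signing setup from a Worker or other fetch environment, a sketch using the aws4fetch library (the library choice is illustrative and not part of these notes; account, bucket, and key are placeholders):

// SigV4-signed GET against the R2 S3 endpoint using aws4fetch (illustrative sketch)
import { AwsClient } from 'aws4fetch';

const r2 = new AwsClient({
  accessKeyId: '<r2 access key id>',
  secretAccessKey: '<r2 secret access key>', // hex(sha256(api token secret)), as above
  service: 's3',
  region: 'auto',
});

async function testGet(): Promise<void> {
  // path-style only: the bucket goes in the path, not the hostname
  const res = await r2.fetch('https://<account>.r2.cloudflarestorage.com/<bucket>/<key>');
  console.log(res.status, await res.text());
}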

Known unimplemented features that can cause incompatibility:

  • No public-read buckets!
  • No presigned urls!
  • Path-style requests only, no "vhost" style with bucket name as a subdomain

ListBuckets

HeadBucket

CreateBucket

DeleteBucket

GetBucketEncryption

  • only supported value is AES256
  • docbug: not listed in official docs

DeleteBucketEncryption

  • bug: returns 200, should return 204
  • bug: doesn't seem to do anything??
  • docbug: not listed in official docs

PutBucketEncryption

  • bug: R2 does not support the standard xml declaration <?xml version="1.0" encoding="UTF-8"?> before the payload
  • bug: Any parameter values that R2 does not support fail with Unexpected status 400, code=MalformedXML, message=The XML you provided was not well formed or did not validate against our published schema.
  • R2 only supports AES256 for SSEAlgorithm and true for BucketKeyEnabled (see the example body below)
  • docbug: not listed in official docs
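
For reference, a minimal request body matching the constraints above (the standard S3 ServerSideEncryptionConfiguration shape; note no XML declaration, and whether the usual xmlns attribute is needed or tolerated is untested here):

<ServerSideEncryptionConfiguration>
  <Rule>
    <ApplyServerSideEncryptionByDefault>
      <SSEAlgorithm>AES256</SSEAlgorithm>
    </ApplyServerSideEncryptionByDefault>
    <BucketKeyEnabled>true</BucketKeyEnabled>
  </Rule>
</ServerSideEncryptionConfiguration>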

GetBucketLocation

  • implemented, returns a location constraint of auto
  • docbug: not listed in official docs

HeadObject

ListObjects

  • implemented!
  • docbug: not listed in the official docs

ListObjectsV2

  • bug: when requesting with a max-keys and prefix, if there are more objects than the limit, but fewer commonPrefixes, IsTruncated will be incorrectly set to false, and no NextContinuationToken returned, preventing further listing!

GetObject

  • bug: unbounded requests e.g. bytes=0- are not supported
  • does not support multiple ranges (neither does amazon)
  • Transparent compression! Like GCP, you can upload with content-encoding: gzip and Cloudflare will decompress if the client doesn't send accept-encoding: gzip
  • Important: if your object is already stored encoded in R2, set the content-encoding response header and also the non-standard Cloudflare encodeBody: 'manual' property in ResponseInit (see the sketch below)
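
A serving sketch for the already-encoded case via the worker binding (TypeScript; MY_BUCKET and the object key are assumed names):

// serve an object that was stored already gzip-encoded - MY_BUCKET is an assumed binding name
export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const obj = await env.MY_BUCKET.get('already-gzipped.txt');
    if (!obj) return new Response('not found', { status: 404 });
    return new Response(obj.body, {
      headers: { 'content-encoding': 'gzip', 'content-type': 'text/plain' },
      encodeBody: 'manual', // non-standard Cloudflare extension: the body is already encoded
    });
  },
};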

PutObject

  • does not support conditional PUTs (neither does amazon)

DeleteObject

DeleteObjects

  • bug: Quiet parameter is parsed, but has no effect (does not prevent deleted keys from appearing in the response)

CopyObject

CreateMultipartUpload

UploadPart

UploadPartCopy

  • newbug: basic existing-object copy fails with 400, code=InvalidRequest, message=The specified copy source is not supported as a byte-range copy source

CompleteMultipartUpload

AbortMultipartUpload

  • throws a 500 when the uploadId has already been aborted, among other situations

ListMultipartUploads

  • Not implemented! No way to clean up old uploads if you lose the upload-id! "we'll add listing uploads before we start charging"

Wishlist

  • GetObject: multiple ranges in a single request
  • PutObject: conditional puts
  • PutObject: ability to set CORS headers
  • CORS headers for S3 API endpoints, would allow storage management via web app

App Compatibility

Arq (Mac backup app)

❌ Not working

  • r2 bug: needs r2 to fix uri encoding of query params

Litestream (SQLite replication)

✅ Working!

  • Add config.LogLevel = aws.LogLevel(aws.LogDebugWithSigning | aws.LogDebugWithHTTPBody) in the config() function in replica_client.go to debug aws calls
  • Uses the v1 aws go sdk, not v2

Configuration file that works:

access-key-id: <r2 access key id>
secret-access-key: <r2 secret access key>

dbs:
  - path: /path/to/backend.db
    replicas:
      - type: s3
        bucket: <bucket>
        path: backend  # important not to include leading slash, else no snapshots will be found, possible litestream bug  
        region: auto
        endpoint: https://<account>.r2.cloudflarestorage.com

AWS CLI (official Amazon CLI)

⚠️ Kind-of working

  • aws s3api list-objects-v2 --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto --bucket <bucket> --prefix whatever/ --delimiter / --max-keys 500
  • aws s3api delete-objects --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto --bucket <bucket> --delete "Objects=[{Key=a}],Quiet=true" (to reproduce quiet output bug)
  • aws s3 sync /path/to/local/dir s3://<bucket>/<path> --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto
    • Fails when using multipart uploading: can disable multipart by setting the multipart_threshold config to a high value like the max for PutObject, or can restrict max concurrent requests to stay within the current r2 multipart concurrency restriction of 2.
  • You can set region = auto in ~/.aws/config for the profile instead of passing --region into each command

Example ~/.aws/config:

[profile <r2-profile-name>]
region = auto
s3 =
  # default 10
  max_concurrent_requests = 2

  # default 8MB
  multipart_threshold = 50MB

  # default 8MB
  multipart_chunksize = 50MB

  addressing_style = path

Cyberduck (cloud storage browser for Mac and Windows)

⚠️ Kind-of working

  • Now requires a custom profile (see below)

  • Can also set a hidden preference to use path-style requests (defaults write ~/Library/Preferences/ch.sudo.cyberduck.plist s3.bucket.virtualhost.disable true)

    • Or use the S3 (Deprecated path style requests) profile (search for "path" in Connection Profiles)
    • But this method alone will currently fail with other errors
  • Uploads above 100mb use multipart uploads and fail with "Transfer incomplete" (can disable multipart by increasing the threshold with a hidden preference, e.g. to 1GB defaults write ~/Library/Preferences/ch.sudo.cyberduck.plist s3.upload.multipart.threshold 1073741824)

R2.cyberduckprofile

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
    <dict>
        <key>Protocol</key>
        <string>s3</string>
        <key>Vendor</key>
        <string>cloudflare</string>
        <key>Scheme</key>
        <string>https</string>
        <key>Description</key>
        <string>Cloudflare R2</string>
        <key>Default Hostname</key>
        <string>ACCOUNT_ID.r2.cloudflarestorage.com</string>
        <key>Hostname Configurable</key>
        <true/>
        <key>Port Configurable</key>
        <false/>
        <key>Username Configurable</key>
        <true/>
        <key>Region</key>
        <string>auto</string>
        <key>Properties</key>
        <array>
            <string>s3service.disable-dns-buckets=true</string>
        </array>
    </dict>
</plist>

Synology Cloud Sync (sync and share files from a Synology NAS)

Version 2.5.1-1230 for DSM 6.x

❌ Not working

  • "S3 Storage" > "Custom Server URL" > No option for specifying path-style

Version ??? for DSM 7.x

❌ Not working

  • No option for specifying path-style

Synology Hyper Backup (backup from a Synology NAS)

Version 2.2.9-1520 for DSM 6.x

❌ Not working

  • No option for specifying path-style

Version 3.0.2 for DSM 7.x

✅ Working (reportedly)

  • Option for specifying path-style!

rclone (command-line cloud storage client)

⚠️ Kind-of working (requires a beta build, will be released in v1.59)

rclone.conf

[r2]
type = s3
provider = Cloudflare
access_key_id = <r2 access key id>
secret_access_key = <r2 secret access key>
endpoint = https://<account>.r2.cloudflarestorage.com
region = auto
list_url_encode = true
acl = private

Hugo Deploy (upload a local Hugo build using the hugo CLI)

✅ Working

  • example url in the deployment target toml: URL = "s3://<bucket>?region=auto&endpoint=<account>.r2.cloudflarestorage.com&s3ForcePathStyle=true"
  • AWS_PROFILE=<r2-profile-name> hugo deploy to use r2 credentials from a shared profile
  • if you use gzip = true in your hugo deployment toml, make sure to serve the objects out using encodeBody: 'manual' (see above)
kopf commented Jan 18, 2024:

Hi there,

stumbled across this while trying to debug the problems colleagues were having when using Cyberduck to upload to R2.

Thanks for this, and for making it public!

johnspurlock-skymethod (author) replied:

Thanks - hopefully most of these things are now obsolete or officially documented. This was back in the very early days of R2 beta.

I'm glad you got Cyberduck working!

mauricius commented:

Just wanted to mention that the error The specified copy source is not supported as a byte-range copy source for the UploadPartCopy operation is likely caused by the fact that R2 doesn't support the x-amz-copy-source header, which is required in the case of S3. As it stands, I think this operation is basically useless in R2.

johnspurlock-skymethod (author) replied:

Agreed - I haven't rechecked this in a while, but supporting x-amz-copy-source-range without x-amz-copy-source does not make sense
