https://developers.cloudflare.com/r2/platform/pricing/
- Reads: up to 10,000,000 read operations free (a $3.60 value), then $0.36 per 1,000,000 operations
- Writes: up to 1,000,000 write operations free (a $4.50 value), then $4.50 per 1,000,000 operations
- Storage: up to 10 GB-month free (a $0.15 value), then $0.015 per 1 GB-month of storage ($15.36 per 1 TB-month)
- Storage usage is currently "sampled at 10 min but might tune that down to something like 15min or 20min"
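To make the arithmetic concrete, here is a small sketch of estimating a monthly bill from the beta rates above (TypeScript; the free-tier allowances are ignored for simplicity and the usage numbers are purely illustrative):

```ts
// Beta rates from the pricing page above; free-tier allowances are ignored
// here to keep the arithmetic obvious.
const READ_COST_PER_MILLION = 0.36;      // $ per 1,000,000 read operations
const WRITE_COST_PER_MILLION = 4.5;      // $ per 1,000,000 write operations
const STORAGE_COST_PER_GB_MONTH = 0.015; // $ per GB-month (0.015 * 1024 = $15.36 per TB-month)

function estimateMonthlyCost(reads: number, writes: number, gbMonths: number): number {
  return (
    (reads / 1_000_000) * READ_COST_PER_MILLION +
    (writes / 1_000_000) * WRITE_COST_PER_MILLION +
    gbMonths * STORAGE_COST_PER_GB_MONTH
  );
}

// Hypothetical month: 50M reads, 5M writes, 1 TB stored
// = 50 * 0.36 + 5 * 4.50 + 1024 * 0.015 = 18.00 + 22.50 + 15.36
console.log(estimateMonthlyCost(50_000_000, 5_000_000, 1024).toFixed(2)); // "55.86"
```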
https://developers.cloudflare.com/r2/platform/limits/
- max 1000 buckets per account
- max just under 5 TB per object
- max just under 5 GB per individual upload (e.g. a single `PutObject` call or one multipart part)
unofficial:
- multipart upload parts must all be the same size except the last part! includes copy/copy-range parts (see the part-sizing sketch below)
- max 2 or 3 simultaneous multipart part uploads per upload-id
- max 200 simultaneous uploads per bucket
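To illustrate the equal-part-size constraint mentioned above, here is a small sketch that computes part byte ranges; the 50 MiB part size is an arbitrary assumption, not an R2 requirement:

```ts
// Every part gets the same size; only the final part may be smaller.
const PART_SIZE = 50 * 1024 * 1024; // 50 MiB, an illustrative choice

function partRanges(totalSize: number): Array<{ start: number; end: number }> {
  const ranges: Array<{ start: number; end: number }> = [];
  for (let start = 0; start < totalSize; start += PART_SIZE) {
    ranges.push({ start, end: Math.min(start + PART_SIZE, totalSize) - 1 });
  }
  return ranges;
}

// A 120 MiB object becomes parts of 50 MiB, 50 MiB and 20 MiB.
console.log(partRanges(120 * 1024 * 1024)); // three ranges, last one smaller
```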
Runtime interface and associated types:
https://github.com/cloudflare/workers-types/blob/master/index.d.ts#L1000
Need to specify a `compatibility_date` of `2022-04-18` during script upload to enable the latest runtime bindings.
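As a quick orientation, here is a minimal Worker sketch exercising the binding. The binding name `MY_BUCKET`, the key names, and the wrangler configuration are assumptions; it presumes a `[[r2_buckets]]` binding plus `compatibility_date = "2022-04-18"` in `wrangler.toml`:

```ts
interface Env {
  MY_BUCKET: R2Bucket; // type comes from @cloudflare/workers-types
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // put: the md5 option, if used, must be a hex string (see the notes below)
    await env.MY_BUCKET.put("hello.txt", "hello r2", {
      httpMetadata: { contentType: "text/plain" },
    });

    // list: note there is no startAfter option yet (see the bug notes below)
    const listing = await env.MY_BUCKET.list({ prefix: "hello", limit: 10 });

    // get: returns null when the key does not exist
    const object = await env.MY_BUCKET.get("hello.txt");
    const body = object ? await object.text() : "(missing)";

    return new Response(
      JSON.stringify({ keys: listing.objects.map((o) => o.key), body }),
      { headers: { "content-type": "application/json" } }
    );
  },
};
```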
- bug: no `options.startAfter` equiv to the s3 api (on `.list`)
- bug: any cache-control value will appear as `httpMetadata.contentDisposition`, not `httpMetadata.cacheControl`!
- bug: any content-encoding value will appear as `httpMetadata.contentDisposition`, not `httpMetadata.contentEncoding`!
- bug: this also affects `writeHttpMetadata` in the same way
- bug: passing in a `limit` smaller than the number of `delimitedPrefixes` that would be returned fails with `We encountered an internal error. Please try again`
- bug: when requesting with a `limit` and `prefix`, if there are more objects than the limit, but fewer `delimitedPrefixes`, `truncated` will be incorrectly set to `false`, and no `cursor` returned, preventing further listing!
- object etags are payload md5
- version is a 128-bit uuid (32 hex chars) with an undefined uuid version
- bug: `if-none-match`, if sent into `.get` via one of the Headers as the `onlyIf` value, requires an etag value stripped of double quotes (against the spec). After stripping quotes, it will return an obj with no `body` property to indicate a 304 should be returned (see the workaround sketch after this list)
- bug: `if-match`, if sent into `.get` via one of the Headers as the `onlyIf` value, requires an etag value stripped of double quotes (against the spec). After stripping quotes, it will return an obj with no `body` property to indicate a 412 should be returned
- the md5 option on `.put`, if provided as a string, must be hex (not b64 like the `content-md5` header)
- `.delete` takes a single key only, no analog to `DeleteObjects` to delete multiple objects in a single call ("plumbing is done, just not exposed in the runtime")
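Here is a hedged sketch of working around the `onlyIf` etag bugs above by stripping the quotes before handing the header to `.get`. The binding name, key, and helper function are illustrative, not part of the R2 API:

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

// Returns 200 with the body, or 304 if the client's etag still matches.
async function getUnlessMatching(env: Env, key: string, clientEtag: string): Promise<Response> {
  // Workaround for the bug above: R2 currently wants the bare etag, without
  // the double quotes the HTTP spec requires around entity tags.
  const bareEtag = clientEtag.replace(/^"|"$/g, "");
  const result = await env.MY_BUCKET.get(key, {
    onlyIf: new Headers({ "if-none-match": bareEtag }),
  });

  if (result === null) {
    return new Response("object not found", { status: 404 });
  }
  // When the precondition fails, R2 returns an object with no `body`
  // property, which signals that a 304 should be sent back.
  if (!("body" in result)) {
    return new Response(null, { status: 304 });
  }

  const headers = new Headers();
  result.writeHttpMetadata(headers);
  headers.set("etag", result.httpEtag);
  return new Response(result.body, { headers });
}
```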
Official docs:
https://developers.cloudflare.com/r2/platform/s3-compatibility/api/
V4 Signing:
- endpoint: `<account-id>.r2.cloudflarestorage.com`
- access_key_id: cloudflare api token id (from dev tools when editing or listing)
- secret_key: `hex(sha256(utf8(<api-token-secret-value>)))`, i.e. `echo -n "<api-token-secret-value>" | sha256sum`
- or use "Manage R2 API Tokens" in the R2 dashboard
- region: `auto` (or empty string, or `us-east-1`, which alias to `auto`). Any other value will fail
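For example, here is a sketch of pointing the AWS SDK for JavaScript v3 at R2 using the values above; the account id, token id, token secret, and bucket name are placeholders, and this particular sdk is just one option among many:

```ts
import { createHash } from "node:crypto";
import { S3Client, ListObjectsV2Command } from "@aws-sdk/client-s3";

// secret_key = hex(sha256(utf8(<api-token-secret-value>)))
const secretAccessKey = createHash("sha256")
  .update("<api-token-secret-value>", "utf8")
  .digest("hex");

const s3 = new S3Client({
  endpoint: "https://<account-id>.r2.cloudflarestorage.com",
  region: "auto",        // anything other than auto / "" / us-east-1 fails
  forcePathStyle: true,  // R2 only supports path-style requests (see below)
  credentials: {
    accessKeyId: "<api-token-id>",
    secretAccessKey,
  },
});

async function main(): Promise<void> {
  const listing = await s3.send(
    new ListObjectsV2Command({ Bucket: "<bucket>", MaxKeys: 10 })
  );
  console.log(listing.Contents?.map((o) => o.Key));
}

main();
```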
Known unimplemented features that can cause incompatibility:
- No public-read buckets!
- No presigned urls!
- Path-style requests only, no "vhost" style with bucket name as a subdomain
- only supported value is `AES256`
- docbug: not listed in official docs
- bug: returns 200, should return 204
- bug: doesn't seem to do anything??
- docbug: not listed in official docs
- bug: R2 does not support the standard xml declaration `<?xml version="1.0" encoding="UTF-8"?>` before the payload
- bug: any parameter values that R2 does not support fail with `Unexpected status 400, code=MalformedXML, message=The XML you provided was not well formed or did not validate against our published schema.`
- R2 only supports `AES256` for `SSEAlgorithm` and `true` for `BucketKeyEnabled`
- docbug: not listed in official docs
- implemented, returns a location constraint of `auto`
- docbug: not listed in official docs
- implemented!
- docbug: not listed in the official docs
- bug: when requesting with a `max-keys` and `prefix`, if there are more objects than the limit, but fewer `commonPrefixes`, `IsTruncated` will be incorrectly set to `false`, and no `NextContinuationToken` returned, preventing further listing!
- bug: unbounded range requests e.g. `bytes=0-` are not supported
- does not support multiple ranges (neither does amazon)
- Transparent compression! like gcp, can upload with `content-encoding: gzip` and cf will decompress if the client doesn't send `accept-encoding: gzip`
- Important: if your object is already encoded in R2, set the `content-encoding` response header and also the non-standard cloudflare `encodeBody: 'manual'` property in `ResponseInit` (see the sketch after this list)
- does not support conditional PUTs (neither does amazon)
- bug: `Quiet` parameter is parsed, but has no effect (does not prevent deleted keys from appearing in the response)
- new bug: basic existing-object copy fails with `400, code=InvalidRequest, message=The specified copy source is not supported as a byte-range copy source`
- throws 500 when uploadId has already been aborted and other situations
- Not implemented! No way to clean up old uploads if you lose the upload-id! "we'll add listing uploads before we start charging"
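To illustrate the `encodeBody: 'manual'` point above, here is a hedged Worker sketch serving an object that was uploaded pre-gzipped; the binding name is an assumption and the types assume `@cloudflare/workers-types` (which declares the non-standard `encodeBody` field on `ResponseInit`):

```ts
interface Env {
  MY_BUCKET: R2Bucket;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const key = new URL(request.url).pathname.slice(1);
    // Assumes the object was stored with a gzip-compressed body and
    // httpMetadata.contentEncoding = "gzip".
    const object = await env.MY_BUCKET.get(key);
    if (object === null) {
      return new Response("object not found", { status: 404 });
    }

    const headers = new Headers();
    object.writeHttpMetadata(headers); // copies content-encoding: gzip etc.
    headers.set("etag", object.httpEtag);

    // Non-standard Cloudflare extension: pass the stored bytes through as-is
    // instead of letting the runtime re-encode the response body.
    return new Response(object.body, { headers, encodeBody: "manual" });
  },
};
```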
Wishlist:
- `GetObject`: multiple ranges in a single request
- `PutObject`: conditional puts
- `PutObject`: ability to set CORS headers
- CORS headers for S3 API endpoints, would allow storage management via web app
Arq (Mac backup app)
❌ Not working
- r2 bug: needs r2 to fix uri encoding of query params
Litestream (SQLite replication)
✅ Working!
- Add `config.LogLevel = aws.LogLevel(aws.LogDebugWithSigning | aws.LogDebugWithHTTPBody)` in the `config()` function in `replica_client.go` to debug aws calls
- Uses the v1 aws go sdk, not v2
Configuration file that works:
access-key-id: <r2 access key id>
secret-access-key: <r2 secret access key>
dbs:
  - path: /path/to/backend.db
    replicas:
      - type: s3
        bucket: <bucket>
        path: backend  # important not to include leading slash, else no snapshots will be found, possible litestream bug
        region: auto
        endpoint: https://<account>.r2.cloudflarestorage.com
AWS CLI (official Amazon cli)
aws s3api list-objects-v2 --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto --bucket <bucket> --prefix whatever/ --delimiter / --max-keys 500
aws s3api delete-objects --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto --bucket <bucket> --delete "Objects=[{Key=a}],Quiet=true" (to reproduce quiet output bug)
aws s3 sync /path/to/local/dir s3://<bucket>/<path> --profile <r2-profile-name> --endpoint-url https://<account>.r2.cloudflarestorage.com --region auto
- Fails when using multipart uploading: can disable multipart by setting the `multipart_threshold` config to a high value like the max for `PutObject`, or can restrict max concurrent requests to stay within the current r2 multipart concurrency restriction of 2.
- You can set `region = auto` in `~/.aws/config` for the profile instead of passing `--region` into each command
Example `~/.aws/config`:
[profile <r2-profile-name>]
region = auto
s3 =
  # default 10
  max_concurrent_requests = 2
  # default 8MB
  multipart_threshold = 50MB
  # default 8MB
  multipart_chunksize = 50MB
  addressing_style = path
Cyberduck (cloud storage browser for Mac and Windows)
- Now requires a custom profile (see below)
- Can also set a hidden preference to use path-style requests (`defaults write ~/Library/Preferences/ch.sudo.cyberduck.plist s3.bucket.virtualhost.disable true`)
  - Or use the `S3 (Deprecated path style requests)` profile (search for "path" in Connection Profiles)
  - But this method alone will currently fail with other errors
- Uploads above 100mb use multipart uploads and fail with "Transfer incomplete" (can disable multipart by increasing the threshold with a hidden preference, e.g. to 1GB: `defaults write ~/Library/Preferences/ch.sudo.cyberduck.plist s3.upload.multipart.threshold 1073741824`)
R2.cyberduckprofile
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>Protocol</key>
<string>s3</string>
<key>Vendor</key>
<string>cloudflare</string>
<key>Scheme</key>
<string>https</string>
<key>Description</key>
<string>Cloudflare R2</string>
<key>Default Hostname</key>
<string>ACCOUNT_ID.r2.cloudflarestorage.com</string>
<key>Hostname Configurable</key>
<true/>
<key>Port Configurable</key>
<false/>
<key>Username Configurable</key>
<true/>
<key>Region</key>
<string>auto</string>
<key>Properties</key>
<array>
<string>s3service.disable-dns-buckets=true</string>
</array>
</dict>
</plist>
Synology Cloud Sync (sync and share files from a Synology NAS)
❌ Not working
- "S3 Storage" > "Custom Server URL" > No option for specifying path-style
Synology Hyper Backup (backup from a Synology NAS)
❌ Not working
- No option for specifying path-style
rclone
✅ Working (reportedly)
- Option for specifying path-style!
- Use the beta build, see this github issue
rclone.conf
[r2]
type = s3
provider = Cloudflare
access_key_id = <r2 access key id>
secret_access_key = <r2 secret access key>
endpoint = https://<account>.r2.cloudflarestorage.com
region = auto
list_url_encode = true
acl = private
Hugo Deploy (upload a local hugo build using the hugo cli)
✅ Working
- example url in the deployment target toml: `URL = "s3://<bucket>?region=auto&endpoint=<account>.r2.cloudflarestorage.com&s3ForcePathStyle=true"`
- `AWS_PROFILE=<r2-profile-name> hugo deploy` to use r2 credentials from a shared profile
- if you use `gzip = true` in your hugo deployment toml, make sure to serve the files out using `encodeBody: 'manual'` (see above)
Thanks - hopefully most of these things are now obsolete or officially documented. This was back in the very early days of R2 beta.
I'm glad you got Cyberduck working!