@osy
Last active January 26, 2024 18:18
Local caching for GitHub Actions self-hosted runner using Squid Proxy

One of the biggest issues with using a self-hosted GitHub runner is that actions that download large amounts of data will bottleneck on the network. actions/cache does not support caching objects locally, and artifacts stored on GitHub's servers require a lot of bandwidth to fetch on every job. We can, however, set up a caching proxy using Squid with SSL bumping to locally cache the requests made by jobs.

Patching Squid

A major challenge is that actions/cache uses the Azure storage APIs, which make HTTP range requests. While Squid supports range requests, it is not good at caching them. There is an option, range_offset_limit none, which, according to the documentation:

A size of 'none' causes Squid to always fetch the object from the beginning so it may cache the result. (2.0 style)

However, after extensive debugging, I discovered that the feature does not work if the client closes the connection. When range_offset_limit is set, Squid will make a full request to the server, but once it fetches the requested range, it will immediately return it to the client, and the client will (usually) close the connection. Once the client side connection is closed, Squid will close the server connection as well and discard any incomplete data from the cache.

A patch is provided for Squid 5.6 which changes the behaviour when range_offset_limit is used. With the patch applied, the data will not be returned immediately once the requested range has been read by Squid. Instead, the response will stall until the entire object has been received from the origin server. This forces the client to keep the connection open while the data is being fetched. Additionally, the patch adds support for the x-ms-range header that Azure uses and a workaround for Azure returning Content-Length: 0 on cache refresh responses.
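Conceptually, the x-ms-range support boils down to treating Azure's header as an alias for the standard Range header (both use the same bytes=start-end value syntax). A rough Python sketch of that normalization, for illustration only; the real patch does this inside Squid's C++ header parser:

```python
# Sketch: fold Azure's x-ms-range header into a standard Range header.
# Names here are illustrative; the patch implements this in Squid itself.
def normalize_range_headers(headers):
    """Return a copy of `headers` (case-insensitive keys lowered) where
    x-ms-range is treated as an alias for Range."""
    out = {k.lower(): v for k, v in headers.items()}
    if 'x-ms-range' in out and 'range' not in out:
        out['range'] = out.pop('x-ms-range')
    return out
```

After this, the usual range-handling path (and range_offset_limit) applies to Azure clients as well.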

For macOS, a Homebrew formula is also provided which you can install directly with brew install squid.rb.

Store ID Helper Program

Both GitHub releases and actions/cache Azure objects are fetched with GET requests carrying additional authentication parameters. (I'm sure GitHub artifacts work this way as well, but I don't currently use them, so I don't have the URL format captured.) A helper program is needed to map each URL to a store ID so that Squid treats the same request with different parameters as the same object. The helper program also does some GNOME mirror mapping that we need for our builds; it can be removed if you only need to cache GitHub objects.

Squid Configuration

The configuration is largely inspired by this blog post, which details setting up SSL bump for caching large downloads. An example config is attached, but here are the important parts:

SSL Bump

The following sets up SSL bump. Certificate generation and setup are detailed here. We exclude pipelines.actions.githubusercontent.com because it is used for Actions control traffic and there is no need to introduce extra latency there.

http_port 3128 tcpkeepalive=60,30,3 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB tls-cert=/opt/homebrew/etc/squid/squid-self-signed.crt tls-key=/opt/homebrew/etc/squid/squid-self-signed.key cipher=HIGH:MEDIUM:!LOW:!RC4:!SEED:!IDEA:!3DES:!MD5:!EXP:!PSK:!DSS options=NO_TLSv1,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE tls-dh=prime256v1:/opt/homebrew/etc/squid/squid-self-signed_dhparam.pem

acl step1 at_step SslBump1
acl github_pipeline ssl::server_name pipelines.actions.githubusercontent.com

sslcrtd_program /opt/homebrew/opt/squid/libexec/security_file_certgen -s /opt/homebrew/var/logs/ssl_db -M 20MB
sslcrtd_children 5
ssl_bump peek step1
ssl_bump splice github_pipeline
ssl_bump stare all
sslproxy_cert_error deny all

Collapsed Forwarding

If one request is currently being cached and another request arrives for the same object, we want to stall the second request until the first one finishes. Usually this isn't good for performance, but when we are exclusively caching large downloads, it eliminates a lot of redundant downloads.

collapsed_forwarding on
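In effect, collapsed forwarding acts like a per-object lock: the first miss triggers the origin fetch, and concurrent requests for the same key wait on that fetch instead of starting their own. A toy Python model of the idea (not Squid's implementation):

```python
import threading

# Toy model of collapsed forwarding: concurrent requests for the same key
# share a single origin fetch instead of each downloading the object.
class CollapsingCache:
    def __init__(self, fetch):
        self._fetch = fetch        # function performing the slow origin download
        self._cache = {}
        self._locks = {}
        self._mutex = threading.Lock()

    def get(self, key):
        with self._mutex:
            if key in self._cache:
                return self._cache[key]
            lock = self._locks.setdefault(key, threading.Lock())
        with lock:                 # later requests stall here until the first finishes
            with self._mutex:
                if key in self._cache:
                    return self._cache[key]
            value = self._fetch(key)
            with self._mutex:
                self._cache[key] = value
            return value
```

Whatever the number of concurrent callers, the origin is contacted once per object; everyone else is served from the freshly populated cache.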

FD Limit

On macOS, the default FD limit (256) is too low.

max_filedescriptors 4096

Cache settings

As detailed above, this lets Squid recognize GET requests that differ only in their parameters as requests for the same object. We store the program at /opt/homebrew/etc/squid/github_store_id_helper.py.

store_id_program /opt/homebrew/etc/squid/github_store_id_helper.py
store_id_children 40 startup=10 idle=5 concurrency=0

We limit each object in the cache to 1000 MB and the total cache size to 20000 MB; adjust these as needed.

maximum_object_size 1000 MB
cache_dir aufs /opt/homebrew/var/cache/squid 20000 16 256

Set up refresh patterns for Azure and GitHub releases (actions artifacts TBD). The overrides ensure these are cached regardless of the HTTP response headers. This is fine because the objects have unique IDs in the URL.

refresh_pattern y2oiacprodeus2file6.blob.core.windows.net\/.*   1440    20% 10080   ignore-reload ignore-no-store ignore-private override-expire
refresh_pattern objects.githubusercontent.com\/github-production-release-asset-.*   1440 20% 10080  ignore-reload ignore-no-store ignore-private override-expire

As detailed above, this requires the patched Squid to work: we want range downloads to cache the entire object.

acl azure_storage dstdomain .blob.core.windows.net
range_offset_limit -1 azure_storage

Runner setup

Modify your .env file in the runner directory and add the proxy settings as well as the SSL bump certificate.

http_proxy=http://127.0.0.1:3128
https_proxy=http://127.0.0.1:3128
NODE_EXTRA_CA_CERTS=/opt/homebrew/etc/squid/squid-self-signed.pem
#!/usr/bin/env python3
import re
import sys

STRIP_PARAMS = [
    'https://objects.githubusercontent.com/github-production-release-asset-',
    'https://y2oiacprodeus2file6.blob.core.windows.net/',
]

GNOME_PROJECTS = [
    'glib',
    'json-glib',
    'libsoup',
    'phodav',
]

def stripParams(url):
    idx = url.find('?')
    if idx < 0:
        return None
    else:
        return url[0:idx]

def parseGnomeMirror(url, project):
    match = re.match(r'^https?:\/\/.*\/' + project + r'\/([\d\.]+)\/(' + project + r'-[\d\.]+\.tar\.\w+)', url)
    if not match:
        return None
    else:
        version = match.group(1)
        file = match.group(2)
        return f'https://download.gnome.org/sources/{project}/{version}/{file}'

def parseUrl(url, method=None):
    if method != None and method != 'GET' and method != 'HEAD':
        return None
    for candidate in STRIP_PARAMS:
        if url.startswith(candidate):
            return stripParams(url)
    for project in GNOME_PROJECTS:
        storeID = parseGnomeMirror(url, project)
        if storeID != None:
            return storeID
    return None

def parseLine(line):
    parts = line.split(' ')
    channelID = None
    method = None
    if len(parts) > 1 and '://' in parts[1]:
        channelID = parts[0]
        url = parts[1]
        if len(parts) > 4:
            method = parts[4]
    else:
        url = parts[0]
        if len(parts) > 3:
            method = parts[3]
    storeID = parseUrl(url, method)
    return (channelID, storeID)

def main():
    for line in sys.stdin:
        line = line.strip()
        try:
            (channelID, storeID) = parseLine(line)
            if storeID == None:
                result = "ERR"
            else:
                result = "OK store-id=" + storeID
            if channelID != None:
                result = channelID + " " + result
        except:
            result = 'BH'
        sys.stdout.write(result + '\n')
        sys.stdout.flush()

if __name__ == '__main__':
    sys.exit(main())
diff -Naur a/src/HttpHeader.cc b/src/HttpHeader.cc
--- a/src/HttpHeader.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/HttpHeader.cc 2022-08-12 20:20:34.000000000 -0700
@@ -290,7 +290,8 @@
(id == Http::HdrType::WARNING) ||
// TODO: Consider updating Vary headers after comparing the magnitude of
// the required changes (and/or cache losses) with compliance gains.
- (id == Http::HdrType::VARY);
+ (id == Http::HdrType::VARY) ||
+ (id == Http::HdrType::CONTENT_LENGTH);
}
void
@@ -1274,7 +1275,8 @@
* hopefully no clients send mismatched headers! */
if ((e = findEntry(Http::HdrType::RANGE)) ||
- (e = findEntry(Http::HdrType::REQUEST_RANGE))) {
+ (e = findEntry(Http::HdrType::REQUEST_RANGE)) ||
+ (e = findEntry(Http::HdrType::X_MS_RANGE))) {
r = HttpHdrRange::ParseCreate(&e->value);
httpHeaderNoteParsedEntry(e->id, e->value, !r);
}
diff -Naur a/src/client_side_reply.cc b/src/client_side_reply.cc
--- a/src/client_side_reply.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/client_side_reply.cc 2022-08-12 16:08:02.000000000 -0700
@@ -1942,8 +1942,10 @@
clientReplyContext::pushStreamData(StoreIOBuffer const &result, char *source)
{
StoreIOBuffer localTempBuffer;
+ const int64_t expectedBodySize =
+ http->storeEntry()->mem().baseReply().bodySize(http->request->method);
- if (result.length == 0) {
+ if (result.length == 0 && result.offset - headers_sz != expectedBodySize) {
debugs(88, 5, "clientReplyContext::pushStreamData: marking request as complete due to 0 length store result");
flags.complete = 1;
}
diff -Naur a/src/client_side_request.cc b/src/client_side_request.cc
--- a/src/client_side_request.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/client_side_request.cc 2022-08-12 15:42:43.000000000 -0700
@@ -1098,6 +1098,7 @@
else {
req_hdr->delById(Http::HdrType::RANGE);
req_hdr->delById(Http::HdrType::REQUEST_RANGE);
+ req_hdr->delById(Http::HdrType::X_MS_RANGE);
request->ignoreRange("neither HEAD nor GET");
}
diff -Naur a/src/http/RegisteredHeaders.h b/src/http/RegisteredHeaders.h
--- a/src/http/RegisteredHeaders.h 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/RegisteredHeaders.h 2022-08-11 10:59:26.000000000 -0700
@@ -111,6 +111,7 @@
X_SQUID_ERROR, /**< Squid custom header on generated error responses */
HDR_X_ACCELERATOR_VARY, /**< obsolete Squid custom header. */
X_NEXT_SERVICES, /**< Squid custom ICAP header */
+ X_MS_RANGE, /**< Used by Azure clients */
SURROGATE_CAPABILITY, /**< Edge Side Includes (ESI) header */
SURROGATE_CONTROL, /**< Edge Side Includes (ESI) header */
FRONT_END_HTTPS, /**< MS Exchange custom header we may have to add */
diff -Naur a/src/http/RegisteredHeadersHash.gperf b/src/http/RegisteredHeadersHash.gperf
--- a/src/http/RegisteredHeadersHash.gperf 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/RegisteredHeadersHash.gperf 2022-08-11 11:00:16.000000000 -0700
@@ -102,6 +102,7 @@
X-Squid-Error, Http::HdrType::X_SQUID_ERROR, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader
X-Accelerator-Vary, Http::HdrType::HDR_X_ACCELERATOR_VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader
X-Next-Services, Http::HdrType::X_NEXT_SERVICES, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader
+x-ms-range, Http::HdrType::X_MS_RANGE, Http::HdrFieldType::ftPRange, HdrKind::None
Surrogate-Capability, Http::HdrType::SURROGATE_CAPABILITY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader
Surrogate-Control, Http::HdrType::SURROGATE_CONTROL, Http::HdrFieldType::ftPSc, HdrKind::ListHeader|HdrKind::ReplyHeader
Front-End-Https, Http::HdrType::FRONT_END_HTTPS, Http::HdrFieldType::ftStr, HdrKind::None
diff -Naur a/src/http/Stream.cc b/src/http/Stream.cc
--- a/src/http/Stream.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/Stream.cc 2022-08-12 16:06:20.000000000 -0700
@@ -82,7 +82,9 @@
switch (socketState()) {
case STREAM_NONE:
- pullData();
+ if (!needsStallUntilEnd()) {
+ pullData();
+ }
break;
case STREAM_COMPLETE: {
@@ -128,6 +130,32 @@
}
bool
+Http::Stream::needsStallUntilEnd()
+{
+ const StoreEntry *entry = http->storeEntry();
+ /* ignore if we don't have a range or reply or content length or entry */
+ if (!http->request->range || !reply || !reply->content_length || !entry) {
+ return false;
+ }
+
+ int64_t roffLimit = http->request->getRangeOffsetLimit();
+ debugs(33, 5, reply << " has range limit " << roffLimit);
+
+ if (reply->content_length + reply->hdr_sz == entry->objectLen() ||
+ http->request->range->offsetLimitExceeded(roffLimit)) {
+ debugs(33, 5, reply << " unstalled from sending response");
+ return false;
+ }
+
+ StoreIOBuffer readBuffer;
+ readBuffer.offset = reply->content_length;
+ debugs(33, 5, reply << " stalling until we recieved all data");
+ clientStreamRead(getTail(), http, readBuffer);
+
+ return true;
+}
+
+bool
Http::Stream::multipartRangeRequest() const
{
return http->multipartRangeRequest();
diff -Naur a/src/http/Stream.h b/src/http/Stream.h
--- a/src/http/Stream.h 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/Stream.h 2022-08-12 12:34:56.000000000 -0700
@@ -90,6 +90,9 @@
/// get more data to send
void pullData();
+ /// handles when client needs a partial response and we cache the whole thing
+ bool needsStallUntilEnd();
+
/// \return true if the HTTP request is for multiple ranges
bool multipartRangeRequest() const;
diff -Naur a/src/http.cc b/src/http.cc
--- a/src/http.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http.cc 2022-08-11 10:58:53.000000000 -0700
@@ -2251,6 +2251,8 @@
case Http::HdrType::IF_RANGE:
case Http::HdrType::REQUEST_RANGE:
+
+ case Http::HdrType::X_MS_RANGE:
/** \par Range:, If-Range:, Request-Range:
* Only pass if we accept ranges */
if (!we_do_ranges)
#
# Localhost caching proxy
#
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
acl intermediate_fetching transaction_initiator certificate-fetching
http_access allow intermediate_fetching
#
# Recommended minimum Access Permission configuration:
#
# Deny requests to certain unsafe ports
http_access deny !Safe_ports
# Deny CONNECT to other than secure SSL ports
http_access deny CONNECT !SSL_ports
# Only allow cachemgr access from localhost
http_access allow localhost manager
http_access deny manager
# We strongly recommend the following be uncommented to protect innocent
# web applications running on the proxy server who think the only
# one who can access services on "localhost" is a local user
http_access deny to_localhost
# Example rule allowing access from your local networks.
# Adapt localnet in the ACL section to list your (internal) IP networks
# from where browsing should be allowed
#http_access allow localnet
http_access allow localhost
# And finally deny all other access to this proxy
http_access deny all
# Squid normally listens to port 3128
http_port 3128 tcpkeepalive=60,30,3 ssl-bump generate-host-certificates=on dynamic_cert_mem_cache_size=20MB tls-cert=/opt/homebrew/etc/squid/squid-self-signed.crt tls-key=/opt/homebrew/etc/squid/squid-self-signed.key cipher=HIGH:MEDIUM:!LOW:!RC4:!SEED:!IDEA:!3DES:!MD5:!EXP:!PSK:!DSS options=NO_TLSv1,NO_SSLv3,SINGLE_DH_USE,SINGLE_ECDH_USE tls-dh=prime256v1:/opt/homebrew/etc/squid/squid-self-signed_dhparam.pem
acl step1 at_step SslBump1
acl github_pipeline ssl::server_name pipelines.actions.githubusercontent.com
sslcrtd_program /opt/homebrew/opt/squid/libexec/security_file_certgen -s /opt/homebrew/var/logs/ssl_db -M 20MB
sslcrtd_children 5
ssl_bump peek step1
ssl_bump splice github_pipeline
ssl_bump stare all
sslproxy_cert_error deny all
collapsed_forwarding on
store_id_program /opt/homebrew/etc/squid/github_store_id_helper.py
store_id_children 40 startup=10 idle=5 concurrency=0
maximum_object_size 1000 MB
# Uncomment and adjust the following to add a disk cache directory.
cache_dir aufs /opt/homebrew/var/cache/squid 100000 16 256
# Leave coredumps in the first cache dir
coredump_dir /opt/homebrew/var/cache/squid
acl azure_storage dstdomain .blob.core.windows.net
range_offset_limit -1 azure_storage
# Increase the FD limit
max_filedescriptors 4096
#
# Add any of your own refresh_pattern entries above these.
#
refresh_pattern .(gz|xz|bz2|tar|zip) 1440 20% 10080 ignore-reload ignore-no-store ignore-private override-expire
refresh_pattern storage.googleapis.com\/.* 1440 20% 10080 ignore-reload ignore-no-store ignore-private override-expire
refresh_pattern y2oiacprodeus2file6.blob.core.windows.net\/.* 1440 20% 10080 ignore-reload ignore-no-store ignore-private override-expire
refresh_pattern objects.githubusercontent.com\/github-production-release-asset-.* 1440 20% 10080 ignore-reload ignore-no-store ignore-private override-expire
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
#refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern . 0 20% 4320
class Squid < Formula
  desc "Advanced proxy caching server for HTTP, HTTPS, FTP, and Gopher"
  homepage "http://www.squid-cache.org/"
  url "http://www.squid-cache.org/Versions/v5/squid-5.6.tar.xz"
  sha256 "38d27338a347597ce0e93d0c3be6e5f66b6750417c474ca87ee0d61bb6d148db"
  license "GPL-2.0-or-later"

  livecheck do
    url "http://www.squid-cache.org/Versions/v5/"
    regex(/href=.*?squid[._-]v?(\d+(?:\.\d+)+)-RELEASENOTES\.html/i)
  end

  head do
    url "lp:squid", using: :bzr

    depends_on "autoconf" => :build
    depends_on "automake" => :build
    depends_on "libtool" => :build
  end

  depends_on "openssl@1.1"

  def install
    # https://stackoverflow.com/questions/20910109/building-squid-cache-on-os-x-mavericks
    ENV.append "LDFLAGS", "-lresolv"

    # For --disable-eui, see:
    # http://www.squid-cache.org/mail-archive/squid-users/201304/0040.html
    args = %W[
      --disable-debug
      --disable-dependency-tracking
      --prefix=#{prefix}
      --localstatedir=#{var}
      --sysconfdir=#{etc}
      --enable-ssl
      --enable-ssl-crtd
      --disable-eui
      --enable-pf-transparent
      --with-included-ltdl
      --with-openssl
      --enable-delay-pools
      --enable-disk-io=yes
      --enable-removal-policies=yes
      --enable-storeio=yes
      --disable-strict-error-checking
    ]

    system "./bootstrap.sh" if build.head?
    system "./configure", *args
    system "make", "install"
  end

  service do
    run [opt_sbin/"squid", "-N", "-d 1"]
    keep_alive true
    working_dir var
  end

  test do
    assert_match version.to_s, shell_output("#{sbin}/squid -v")
    pid = fork do
      exec "#{sbin}/squid"
    end
    sleep 2
    begin
      system "#{sbin}/squid", "-k", "check"
    ensure
      exec "#{sbin}/squid -k interrupt"
      Process.wait(pid)
    end
  end

  patch :DATA
end
__END__
diff -Naur a/src/HttpHeader.cc b/src/HttpHeader.cc
--- a/src/HttpHeader.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/HttpHeader.cc 2022-08-12 20:20:34.000000000 -0700
@@ -290,7 +290,8 @@
(id == Http::HdrType::WARNING) ||
// TODO: Consider updating Vary headers after comparing the magnitude of
// the required changes (and/or cache losses) with compliance gains.
- (id == Http::HdrType::VARY);
+ (id == Http::HdrType::VARY) ||
+ (id == Http::HdrType::CONTENT_LENGTH);
}
void
@@ -1274,7 +1275,8 @@
* hopefully no clients send mismatched headers! */
if ((e = findEntry(Http::HdrType::RANGE)) ||
- (e = findEntry(Http::HdrType::REQUEST_RANGE))) {
+ (e = findEntry(Http::HdrType::REQUEST_RANGE)) ||
+ (e = findEntry(Http::HdrType::X_MS_RANGE))) {
r = HttpHdrRange::ParseCreate(&e->value);
httpHeaderNoteParsedEntry(e->id, e->value, !r);
}
diff -Naur a/src/client_side_reply.cc b/src/client_side_reply.cc
--- a/src/client_side_reply.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/client_side_reply.cc 2022-08-12 16:08:02.000000000 -0700
@@ -1942,8 +1942,10 @@
clientReplyContext::pushStreamData(StoreIOBuffer const &result, char *source)
{
StoreIOBuffer localTempBuffer;
+ const int64_t expectedBodySize =
+ http->storeEntry()->mem().baseReply().bodySize(http->request->method);
- if (result.length == 0) {
+ if (result.length == 0 && result.offset - headers_sz != expectedBodySize) {
debugs(88, 5, "clientReplyContext::pushStreamData: marking request as complete due to 0 length store result");
flags.complete = 1;
}
diff -Naur a/src/client_side_request.cc b/src/client_side_request.cc
--- a/src/client_side_request.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/client_side_request.cc 2022-08-12 15:42:43.000000000 -0700
@@ -1098,6 +1098,7 @@
else {
req_hdr->delById(Http::HdrType::RANGE);
req_hdr->delById(Http::HdrType::REQUEST_RANGE);
+ req_hdr->delById(Http::HdrType::X_MS_RANGE);
request->ignoreRange("neither HEAD nor GET");
}
diff -Naur a/src/http/RegisteredHeaders.h b/src/http/RegisteredHeaders.h
--- a/src/http/RegisteredHeaders.h 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/RegisteredHeaders.h 2022-08-11 10:59:26.000000000 -0700
@@ -111,6 +111,7 @@
X_SQUID_ERROR, /**< Squid custom header on generated error responses */
HDR_X_ACCELERATOR_VARY, /**< obsolete Squid custom header. */
X_NEXT_SERVICES, /**< Squid custom ICAP header */
+ X_MS_RANGE, /**< Used by Azure clients */
SURROGATE_CAPABILITY, /**< Edge Side Includes (ESI) header */
SURROGATE_CONTROL, /**< Edge Side Includes (ESI) header */
FRONT_END_HTTPS, /**< MS Exchange custom header we may have to add */
diff -Naur a/src/http/RegisteredHeadersHash.cci b/src/http/RegisteredHeadersHash.cci
--- a/src/http/RegisteredHeadersHash.cci 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/RegisteredHeadersHash.cci 2022-08-11 11:03:30.000000000 -0700
@@ -1,32 +1,32 @@
-/* C++ code produced by gperf version 3.1 */
-/* Command-line: gperf -m 100000 RegisteredHeadersHash.gperf */
+/* C++ code produced by gperf version 3.0.3 */
+/* Command-line: /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/gperf --output-file=RegisteredHeadersHash.cci -m 100000 RegisteredHeadersHash.gperf */
/* Computed positions: -k'1,9,$' */
#if !((' ' == 32) && ('!' == 33) && ('"' == 34) && ('#' == 35) \
-&& ('%' == 37) && ('&' == 38) && ('\'' == 39) && ('(' == 40) \
-&& (')' == 41) && ('*' == 42) && ('+' == 43) && (',' == 44) \
-&& ('-' == 45) && ('.' == 46) && ('/' == 47) && ('0' == 48) \
-&& ('1' == 49) && ('2' == 50) && ('3' == 51) && ('4' == 52) \
-&& ('5' == 53) && ('6' == 54) && ('7' == 55) && ('8' == 56) \
-&& ('9' == 57) && (':' == 58) && (';' == 59) && ('<' == 60) \
-&& ('=' == 61) && ('>' == 62) && ('?' == 63) && ('A' == 65) \
-&& ('B' == 66) && ('C' == 67) && ('D' == 68) && ('E' == 69) \
-&& ('F' == 70) && ('G' == 71) && ('H' == 72) && ('I' == 73) \
-&& ('J' == 74) && ('K' == 75) && ('L' == 76) && ('M' == 77) \
-&& ('N' == 78) && ('O' == 79) && ('P' == 80) && ('Q' == 81) \
-&& ('R' == 82) && ('S' == 83) && ('T' == 84) && ('U' == 85) \
-&& ('V' == 86) && ('W' == 87) && ('X' == 88) && ('Y' == 89) \
-&& ('Z' == 90) && ('[' == 91) && ('\\' == 92) && (']' == 93) \
-&& ('^' == 94) && ('_' == 95) && ('a' == 97) && ('b' == 98) \
-&& ('c' == 99) && ('d' == 100) && ('e' == 101) && ('f' == 102) \
-&& ('g' == 103) && ('h' == 104) && ('i' == 105) && ('j' == 106) \
-&& ('k' == 107) && ('l' == 108) && ('m' == 109) && ('n' == 110) \
-&& ('o' == 111) && ('p' == 112) && ('q' == 113) && ('r' == 114) \
-&& ('s' == 115) && ('t' == 116) && ('u' == 117) && ('v' == 118) \
-&& ('w' == 119) && ('x' == 120) && ('y' == 121) && ('z' == 122) \
-&& ('{' == 123) && ('|' == 124) && ('}' == 125) && ('~' == 126))
+ && ('%' == 37) && ('&' == 38) && ('\'' == 39) && ('(' == 40) \
+ && (')' == 41) && ('*' == 42) && ('+' == 43) && (',' == 44) \
+ && ('-' == 45) && ('.' == 46) && ('/' == 47) && ('0' == 48) \
+ && ('1' == 49) && ('2' == 50) && ('3' == 51) && ('4' == 52) \
+ && ('5' == 53) && ('6' == 54) && ('7' == 55) && ('8' == 56) \
+ && ('9' == 57) && (':' == 58) && (';' == 59) && ('<' == 60) \
+ && ('=' == 61) && ('>' == 62) && ('?' == 63) && ('A' == 65) \
+ && ('B' == 66) && ('C' == 67) && ('D' == 68) && ('E' == 69) \
+ && ('F' == 70) && ('G' == 71) && ('H' == 72) && ('I' == 73) \
+ && ('J' == 74) && ('K' == 75) && ('L' == 76) && ('M' == 77) \
+ && ('N' == 78) && ('O' == 79) && ('P' == 80) && ('Q' == 81) \
+ && ('R' == 82) && ('S' == 83) && ('T' == 84) && ('U' == 85) \
+ && ('V' == 86) && ('W' == 87) && ('X' == 88) && ('Y' == 89) \
+ && ('Z' == 90) && ('[' == 91) && ('\\' == 92) && (']' == 93) \
+ && ('^' == 94) && ('_' == 95) && ('a' == 97) && ('b' == 98) \
+ && ('c' == 99) && ('d' == 100) && ('e' == 101) && ('f' == 102) \
+ && ('g' == 103) && ('h' == 104) && ('i' == 105) && ('j' == 106) \
+ && ('k' == 107) && ('l' == 108) && ('m' == 109) && ('n' == 110) \
+ && ('o' == 111) && ('p' == 112) && ('q' == 113) && ('r' == 114) \
+ && ('s' == 115) && ('t' == 116) && ('u' == 117) && ('v' == 118) \
+ && ('w' == 119) && ('x' == 120) && ('y' == 121) && ('z' == 122) \
+ && ('{' == 123) && ('|' == 124) && ('}' == 125) && ('~' == 126))
/* The character set is not based on ISO-646. */
-#error "gperf generated tables don't work with this execution character set. Please report a bug to <bug-gperf@gnu.org>."
+#error "gperf generated tables don't work with this execution character set. Please report a bug to <bug-gperf@gnu.org>."
#endif
#line 1 "RegisteredHeadersHash.gperf"
@@ -42,26 +42,26 @@
*/
#line 24 "RegisteredHeadersHash.gperf"
class HeaderTableRecord;
- enum
-{
- TOTAL_KEYWORDS = 89,
+enum
+ {
+ TOTAL_KEYWORDS = 90,
MIN_WORD_LENGTH = 2,
MAX_WORD_LENGTH = 25,
- MIN_HASH_VALUE = 13,
- MAX_HASH_VALUE = 114
-};
+ MIN_HASH_VALUE = 7,
+ MAX_HASH_VALUE = 115
+ };
-/* maximum key range = 102, duplicates = 0 */
+/* maximum key range = 109, duplicates = 0 */
#ifndef GPERF_DOWNCASE
#define GPERF_DOWNCASE 1
static unsigned char gperf_downcase[256] =
-{
- 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
- 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
- 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
- 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
- 60, 61, 62, 63, 64, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
+ {
+ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14,
+ 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29,
+ 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44,
+ 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59,
+ 60, 61, 62, 63, 64, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106,
107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121,
122, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104,
105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119,
@@ -75,107 +75,106 @@
225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239,
240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254,
255
-};
+ };
#endif
#ifndef GPERF_CASE_MEMCMP
#define GPERF_CASE_MEMCMP 1
static int
-gperf_case_memcmp (const char *s1, const char *s2, size_t n)
+gperf_case_memcmp (register const char *s1, register const char *s2, register unsigned int n)
{
- for (; n > 0;)
+ for (; n > 0;)
{
- unsigned char c1 = gperf_downcase[(unsigned char)*s1++];
- unsigned char c2 = gperf_downcase[(unsigned char)*s2++];
- if (c1 == c2)
+ unsigned char c1 = gperf_downcase[(unsigned char)*s1++];
+ unsigned char c2 = gperf_downcase[(unsigned char)*s2++];
+ if (c1 == c2)
{
- n--;
- continue;
+ n--;
+ continue;
}
- return (int)c1 - (int)c2;
+ return (int)c1 - (int)c2;
}
- return 0;
+ return 0;
}
#endif
class HttpHeaderHashTable
{
private:
- static inline unsigned int HttpHeaderHash (const char *str, size_t len);
+ static inline unsigned int HttpHeaderHash (const char *str, unsigned int len);
public:
- static const class HeaderTableRecord *lookup (const char *str, size_t len);
+ static const class HeaderTableRecord *lookup (const char *str, unsigned int len);
};
inline unsigned int
-HttpHeaderHashTable::HttpHeaderHash (const char *str, size_t len)
+HttpHeaderHashTable::HttpHeaderHash (register const char *str, register unsigned int len)
{
- static const unsigned char asso_values[] =
+ static const unsigned char asso_values[] =
{
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 27, 115, 115, 4, 115, 115, 115, 115,
- 26, 115, 115, 33, 115, 115, 115, 115, 25, 115,
- 115, 115, 115, 115, 115, 15, 7, 7, 10, 4,
- 33, 66, 42, 22, 115, 63, 10, 33, 18, 44,
- 11, 115, 4, 28, 10, 42, 23, 26, 31, 30,
- 115, 115, 115, 115, 115, 115, 115, 15, 7, 7,
- 10, 4, 33, 66, 42, 22, 115, 63, 10, 33,
- 18, 44, 11, 115, 4, 28, 10, 42, 23, 26,
- 31, 30, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115, 115, 115, 115, 115,
- 115, 115, 115, 115, 115, 115
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 2, 116, 116, 33, 116, 116, 116, 116,
+ 64, 116, 116, 3, 116, 116, 116, 116, 20, 116,
+ 116, 116, 116, 116, 116, 18, 13, 4, 7, 1,
+ 36, 28, 35, 20, 116, 43, 24, 30, 6, 53,
+ 11, 116, 1, 20, 7, 18, 33, 65, 17, 45,
+ 116, 116, 116, 116, 116, 116, 116, 18, 13, 4,
+ 7, 1, 36, 28, 35, 20, 116, 43, 24, 30,
+ 6, 53, 11, 116, 1, 20, 7, 18, 33, 65,
+ 17, 45, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116, 116, 116, 116, 116,
+ 116, 116, 116, 116, 116, 116
};
- unsigned int hval = len;
+ register unsigned int hval = len;
- switch (hval)
+ switch (hval)
{
- default:
- hval += asso_values[static_cast<unsigned char>(str[8])];
- /*FALLTHROUGH*/
- case 8:
- case 7:
- case 6:
- case 5:
- case 4:
- case 3:
- case 2:
- case 1:
- hval += asso_values[static_cast<unsigned char>(str[0])];
+ default:
+ hval += asso_values[(unsigned char)str[8]];
+ /*FALLTHROUGH*/
+ case 8:
+ case 7:
+ case 6:
+ case 5:
+ case 4:
+ case 3:
+ case 2:
+ case 1:
+ hval += asso_values[(unsigned char)str[0]];
break;
}
- return hval + asso_values[static_cast<unsigned char>(str[len - 1])];
+ return hval + asso_values[(unsigned char)str[len - 1]];
}
static const unsigned char lengthtable[] =
-{
- 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5,
- 0, 7, 2, 6, 4, 5, 6, 7, 3, 0, 6, 13, 8, 9,
- 13, 11, 12, 6, 6, 12, 8, 9, 8, 16, 6, 7, 7, 3,
- 7, 18, 7, 13, 5, 18, 13, 15, 16, 16, 13, 7, 19, 13,
- 4, 4, 19, 17, 15, 13, 9, 16, 10, 17, 14, 19, 6, 11,
- 4, 13, 8, 14, 4, 6, 13, 4, 15, 10, 10, 14, 20, 18,
- 11, 19, 15, 11, 12, 10, 25, 12, 0, 16, 14, 0, 3, 17,
- 0, 7, 10, 0, 0, 0, 0, 10, 0, 13, 0, 0, 13, 21,
- 0, 10, 15
-};
+ {
+ 0, 0, 0, 0, 0, 0, 0, 5, 0, 7, 2, 6, 4, 5,
+ 6, 7, 13, 9, 9, 13, 11, 6, 3, 8, 12, 7, 7, 6,
+ 7, 8, 12, 6, 13, 4, 10, 6, 19, 18, 8, 16, 15, 10,
+ 13, 19, 7, 16, 4, 13, 11, 16, 16, 10, 15, 15, 3, 13,
+ 10, 13, 17, 9, 19, 18, 17, 8, 13, 6, 14, 15, 12, 13,
+ 4, 4, 11, 10, 14, 7, 14, 14, 15, 6, 12, 18, 4, 16,
+ 10, 17, 20, 10, 5, 0, 0, 3, 0, 21, 19, 0, 25, 0,
+ 13, 13, 7, 0, 0, 0, 0, 10, 0, 0, 0, 0, 0, 0,
+ 0, 0, 0, 11
+ };
static const class HeaderTableRecord HttpHeaderDefinitionsTable[] =
-{
- {""}, {""}, {""}, {""}, {""}, {""}, {""}, {""}, {""},
- {""}, {""}, {""}, {""},
+ {
+ {""}, {""}, {""}, {""}, {""}, {""}, {""},
#line 79 "RegisteredHeadersHash.gperf"
{"Range", Http::HdrType::RANGE, Http::HdrFieldType::ftPRange, HdrKind::RequestHeader},
{""},
@@ -193,195 +192,195 @@
{"Expect", Http::HdrType::EXPECT, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
#line 88 "RegisteredHeadersHash.gperf"
{"Trailer", Http::HdrType::TRAILER, Http::HdrFieldType::ftStr, HdrKind::HopByHopHeader},
-#line 31 "RegisteredHeadersHash.gperf"
- {"Age", Http::HdrType::AGE, Http::HdrFieldType::ftInt, HdrKind::ReplyHeader},
- {""},
-#line 78 "RegisteredHeadersHash.gperf"
- {"Public", Http::HdrType::PUBLIC, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
#line 81 "RegisteredHeadersHash.gperf"
{"Request-Range", Http::HdrType::REQUEST_RANGE, Http::HdrFieldType::ftPRange, HdrKind::None},
-#line 37 "RegisteredHeadersHash.gperf"
- {"CDN-Loop", Http::HdrType::CDN_LOOP, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
+#line 70 "RegisteredHeadersHash.gperf"
+ {"Negotiate", Http::HdrType::NEGOTIATE, Http::HdrFieldType::ftStr, HdrKind::None},
#line 90 "RegisteredHeadersHash.gperf"
{"Translate", Http::HdrType::TRANSLATE, Http::HdrFieldType::ftStr, HdrKind::None},
#line 46 "RegisteredHeadersHash.gperf"
{"Content-Range", Http::HdrType::CONTENT_RANGE, Http::HdrFieldType::ftPContRange, HdrKind::EntityHeader},
#line 82 "RegisteredHeadersHash.gperf"
{"Retry-After", Http::HdrType::RETRY_AFTER, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
-#line 39 "RegisteredHeadersHash.gperf"
- {"Content-Base", Http::HdrType::CONTENT_BASE, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
-#line 26 "RegisteredHeadersHash.gperf"
- {"Accept", Http::HdrType::ACCEPT, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
-#line 72 "RegisteredHeadersHash.gperf"
- {"Pragma", Http::HdrType::PRAGMA, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
+#line 78 "RegisteredHeadersHash.gperf"
+ {"Public", Http::HdrType::PUBLIC, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 31 "RegisteredHeadersHash.gperf"
+ {"Age", Http::HdrType::AGE, Http::HdrFieldType::ftInt, HdrKind::ReplyHeader},
+#line 37 "RegisteredHeadersHash.gperf"
+ {"CDN-Loop", Http::HdrType::CDN_LOOP, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
#line 47 "RegisteredHeadersHash.gperf"
{"Content-Type", Http::HdrType::CONTENT_TYPE, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
-#line 61 "RegisteredHeadersHash.gperf"
- {"If-Range", Http::HdrType::IF_RANGE, Http::HdrFieldType::ftDate_1123_or_ETag, HdrKind::None},
-#line 70 "RegisteredHeadersHash.gperf"
- {"Negotiate", Http::HdrType::NEGOTIATE, Http::HdrFieldType::ftStr, HdrKind::None},
-#line 67 "RegisteredHeadersHash.gperf"
- {"Location", Http::HdrType::LOCATION, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
-#line 42 "RegisteredHeadersHash.gperf"
- {"Content-Language", Http::HdrType::CONTENT_LANGUAGE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
+#line 98 "RegisteredHeadersHash.gperf"
+ {"X-Cache", Http::HdrType::X_CACHE, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 92 "RegisteredHeadersHash.gperf"
+ {"Upgrade", Http::HdrType::UPGRADE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
#line 83 "RegisteredHeadersHash.gperf"
{"Server", Http::HdrType::SERVER, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
#line 53 "RegisteredHeadersHash.gperf"
{"Expires", Http::HdrType::EXPIRES, Http::HdrFieldType::ftDate_1123, HdrKind::EntityHeader},
-#line 49 "RegisteredHeadersHash.gperf"
- {"Cookie2", Http::HdrType::COOKIE2, Http::HdrFieldType::ftStr, HdrKind::None},
-#line 95 "RegisteredHeadersHash.gperf"
- {"Via", Http::HdrType::VIA, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
-#line 98 "RegisteredHeadersHash.gperf"
- {"X-Cache", Http::HdrType::X_CACHE, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 61 "RegisteredHeadersHash.gperf"
+ {"If-Range", Http::HdrType::IF_RANGE, Http::HdrFieldType::ftDate_1123_or_ETag, HdrKind::None},
+#line 39 "RegisteredHeadersHash.gperf"
+ {"Content-Base", Http::HdrType::CONTENT_BASE, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
+#line 26 "RegisteredHeadersHash.gperf"
+ {"Accept", Http::HdrType::ACCEPT, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
+#line 102 "RegisteredHeadersHash.gperf"
+ {"X-Squid-Error", Http::HdrType::X_SQUID_ERROR, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 51 "RegisteredHeadersHash.gperf"
+ {"ETag", Http::HdrType::ETAG, Http::HdrFieldType::ftETag, HdrKind::EntityHeader},
+#line 115 "RegisteredHeadersHash.gperf"
+ {"*INVALID*:", Http::HdrType::BAD_HDR, Http::HdrFieldType::ftInvalid, HdrKind::None},
+#line 72 "RegisteredHeadersHash.gperf"
+ {"Pragma", Http::HdrType::PRAGMA, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
+#line 40 "RegisteredHeadersHash.gperf"
+ {"Content-Disposition", Http::HdrType::CONTENT_DISPOSITION, Http::HdrFieldType::ftStr, HdrKind::None},
#line 73 "RegisteredHeadersHash.gperf"
{"Proxy-Authenticate", Http::HdrType::PROXY_AUTHENTICATE, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
-#line 110 "RegisteredHeadersHash.gperf"
- {"FTP-Pre", Http::HdrType::FTP_PRE, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 67 "RegisteredHeadersHash.gperf"
+ {"Location", Http::HdrType::LOCATION, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 76 "RegisteredHeadersHash.gperf"
+ {"Proxy-Connection", Http::HdrType::PROXY_CONNECTION, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
+#line 100 "RegisteredHeadersHash.gperf"
+ {"X-Forwarded-For", Http::HdrType::X_FORWARDED_FOR, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
+#line 93 "RegisteredHeadersHash.gperf"
+ {"User-Agent", Http::HdrType::USER_AGENT, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
#line 77 "RegisteredHeadersHash.gperf"
{"Proxy-support", Http::HdrType::PROXY_SUPPORT, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
-#line 32 "RegisteredHeadersHash.gperf"
- {"Allow", Http::HdrType::ALLOW, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
-#line 33 "RegisteredHeadersHash.gperf"
- {"Alternate-Protocol", Http::HdrType::ALTERNATE_PROTOCOL, Http::HdrFieldType::ftStr, HdrKind::HopByHopHeader},
+#line 75 "RegisteredHeadersHash.gperf"
+ {"Proxy-Authorization", Http::HdrType::PROXY_AUTHORIZATION, Http::HdrFieldType::ftStr, HdrKind::RequestHeader|HdrKind::HopByHopHeader},
+#line 111 "RegisteredHeadersHash.gperf"
+ {"FTP-Pre", Http::HdrType::FTP_PRE, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 42 "RegisteredHeadersHash.gperf"
+ {"Content-Language", Http::HdrType::CONTENT_LANGUAGE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
+#line 56 "RegisteredHeadersHash.gperf"
+ {"Host", Http::HdrType::HOST, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
#line 36 "RegisteredHeadersHash.gperf"
{"Cache-Control", Http::HdrType::CACHE_CONTROL, Http::HdrFieldType::ftPCc, HdrKind::ListHeader|HdrKind::GeneralHeader},
-#line 29 "RegisteredHeadersHash.gperf"
- {"Accept-Language", Http::HdrType::ACCEPT_LANGUAGE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
-#line 97 "RegisteredHeadersHash.gperf"
- {"WWW-Authenticate", Http::HdrType::WWW_AUTHENTICATE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 45 "RegisteredHeadersHash.gperf"
+ {"Content-MD5", Http::HdrType::CONTENT_MD5, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
+#line 41 "RegisteredHeadersHash.gperf"
+ {"Content-Encoding", Http::HdrType::CONTENT_ENCODING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
#line 44 "RegisteredHeadersHash.gperf"
{"Content-Location", Http::HdrType::CONTENT_LOCATION, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
-#line 102 "RegisteredHeadersHash.gperf"
- {"X-Squid-Error", Http::HdrType::X_SQUID_ERROR, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
-#line 92 "RegisteredHeadersHash.gperf"
- {"Upgrade", Http::HdrType::UPGRADE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
-#line 40 "RegisteredHeadersHash.gperf"
- {"Content-Disposition", Http::HdrType::CONTENT_DISPOSITION, Http::HdrFieldType::ftStr, HdrKind::None},
-#line 65 "RegisteredHeadersHash.gperf"
- {"Last-Modified", Http::HdrType::LAST_MODIFIED, Http::HdrFieldType::ftDate_1123, HdrKind::EntityHeader},
-#line 56 "RegisteredHeadersHash.gperf"
- {"Host", Http::HdrType::HOST, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
-#line 94 "RegisteredHeadersHash.gperf"
- {"Vary", Http::HdrType::VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
-#line 75 "RegisteredHeadersHash.gperf"
- {"Proxy-Authorization", Http::HdrType::PROXY_AUTHORIZATION, Http::HdrFieldType::ftStr, HdrKind::RequestHeader|HdrKind::HopByHopHeader},
-#line 106 "RegisteredHeadersHash.gperf"
- {"Surrogate-Control", Http::HdrType::SURROGATE_CONTROL, Http::HdrFieldType::ftPSc, HdrKind::ListHeader|HdrKind::ReplyHeader},
-#line 100 "RegisteredHeadersHash.gperf"
- {"X-Forwarded-For", Http::HdrType::X_FORWARDED_FOR, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
-#line 35 "RegisteredHeadersHash.gperf"
- {"Authorization", Http::HdrType::AUTHORIZATION, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
-#line 54 "RegisteredHeadersHash.gperf"
- {"Forwarded", Http::HdrType::FORWARDED, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
-#line 76 "RegisteredHeadersHash.gperf"
- {"Proxy-Connection", Http::HdrType::PROXY_CONNECTION, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
#line 84 "RegisteredHeadersHash.gperf"
{"Set-Cookie", Http::HdrType::SET_COOKIE, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 29 "RegisteredHeadersHash.gperf"
+ {"Accept-Language", Http::HdrType::ACCEPT_LANGUAGE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
+#line 104 "RegisteredHeadersHash.gperf"
+ {"X-Next-Services", Http::HdrType::X_NEXT_SERVICES, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 95 "RegisteredHeadersHash.gperf"
+ {"Via", Http::HdrType::VIA, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
+#line 35 "RegisteredHeadersHash.gperf"
+ {"Authorization", Http::HdrType::AUTHORIZATION, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
+#line 105 "RegisteredHeadersHash.gperf"
+ {"x-ms-range", Http::HdrType::X_MS_RANGE, Http::HdrFieldType::ftPRange, HdrKind::None},
+#line 101 "RegisteredHeadersHash.gperf"
+ {"X-Request-URI", Http::HdrType::X_REQUEST_URI, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
#line 59 "RegisteredHeadersHash.gperf"
{"If-Modified-Since", Http::HdrType::IF_MODIFIED_SINCE, Http::HdrFieldType::ftDate_1123, HdrKind::RequestHeader},
-#line 99 "RegisteredHeadersHash.gperf"
- {"X-Cache-Lookup", Http::HdrType::X_CACHE_LOOKUP, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 54 "RegisteredHeadersHash.gperf"
+ {"Forwarded", Http::HdrType::FORWARDED, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader},
#line 62 "RegisteredHeadersHash.gperf"
{"If-Unmodified-Since", Http::HdrType::IF_UNMODIFIED_SINCE, Http::HdrFieldType::ftDate_1123, HdrKind::None},
+#line 33 "RegisteredHeadersHash.gperf"
+ {"Alternate-Protocol", Http::HdrType::ALTERNATE_PROTOCOL, Http::HdrFieldType::ftStr, HdrKind::HopByHopHeader},
+#line 107 "RegisteredHeadersHash.gperf"
+ {"Surrogate-Control", Http::HdrType::SURROGATE_CONTROL, Http::HdrFieldType::ftPSc, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 58 "RegisteredHeadersHash.gperf"
+ {"If-Match", Http::HdrType::IF_MATCH, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
+#line 65 "RegisteredHeadersHash.gperf"
+ {"Last-Modified", Http::HdrType::LAST_MODIFIED, Http::HdrFieldType::ftDate_1123, HdrKind::EntityHeader},
#line 71 "RegisteredHeadersHash.gperf"
{"Origin", Http::HdrType::ORIGIN, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
-#line 108 "RegisteredHeadersHash.gperf"
- {"FTP-Command", Http::HdrType::FTP_COMMAND, Http::HdrFieldType::ftStr, HdrKind::None},
-#line 55 "RegisteredHeadersHash.gperf"
- {"From", Http::HdrType::FROM, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
+#line 99 "RegisteredHeadersHash.gperf"
+ {"X-Cache-Lookup", Http::HdrType::X_CACHE_LOOKUP, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 28 "RegisteredHeadersHash.gperf"
+ {"Accept-Encoding", Http::HdrType::ACCEPT_ENCODING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader|HdrKind::ReplyHeader},
+#line 69 "RegisteredHeadersHash.gperf"
+ {"Mime-Version", Http::HdrType::MIME_VERSION, Http::HdrFieldType::ftStr, HdrKind::GeneralHeader},
#line 30 "RegisteredHeadersHash.gperf"
{"Accept-Ranges", Http::HdrType::ACCEPT_RANGES, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
-#line 58 "RegisteredHeadersHash.gperf"
- {"If-Match", Http::HdrType::IF_MATCH, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
-#line 43 "RegisteredHeadersHash.gperf"
- {"Content-Length", Http::HdrType::CONTENT_LENGTH, Http::HdrFieldType::ftInt64, HdrKind::EntityHeader},
-#line 51 "RegisteredHeadersHash.gperf"
- {"ETag", Http::HdrType::ETAG, Http::HdrFieldType::ftETag, HdrKind::EntityHeader},
-#line 113 "RegisteredHeadersHash.gperf"
- {"Other:", Http::HdrType::OTHER, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
-#line 101 "RegisteredHeadersHash.gperf"
- {"X-Request-URI", Http::HdrType::X_REQUEST_URI, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 55 "RegisteredHeadersHash.gperf"
+ {"From", Http::HdrType::FROM, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
#line 66 "RegisteredHeadersHash.gperf"
{"Link", Http::HdrType::LINK, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
-#line 104 "RegisteredHeadersHash.gperf"
- {"X-Next-Services", Http::HdrType::X_NEXT_SERVICES, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 109 "RegisteredHeadersHash.gperf"
+ {"FTP-Command", Http::HdrType::FTP_COMMAND, Http::HdrFieldType::ftStr, HdrKind::None},
#line 38 "RegisteredHeadersHash.gperf"
{"Connection", Http::HdrType::CONNECTION, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
-#line 93 "RegisteredHeadersHash.gperf"
- {"User-Agent", Http::HdrType::USER_AGENT, Http::HdrFieldType::ftStr, HdrKind::RequestHeader},
#line 27 "RegisteredHeadersHash.gperf"
{"Accept-Charset", Http::HdrType::ACCEPT_CHARSET, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
-#line 105 "RegisteredHeadersHash.gperf"
- {"Surrogate-Capability", Http::HdrType::SURROGATE_CAPABILITY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
-#line 103 "RegisteredHeadersHash.gperf"
- {"X-Accelerator-Vary", Http::HdrType::HDR_X_ACCELERATOR_VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
-#line 45 "RegisteredHeadersHash.gperf"
- {"Content-MD5", Http::HdrType::CONTENT_MD5, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
-#line 34 "RegisteredHeadersHash.gperf"
- {"Authentication-Info", Http::HdrType::AUTHENTICATION_INFO, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
-#line 107 "RegisteredHeadersHash.gperf"
+#line 49 "RegisteredHeadersHash.gperf"
+ {"Cookie2", Http::HdrType::COOKIE2, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 57 "RegisteredHeadersHash.gperf"
+ {"HTTP2-Settings", Http::HdrType::HTTP2_SETTINGS, Http::HdrFieldType::ftStr, HdrKind::RequestHeader|HdrKind::HopByHopHeader},
+#line 43 "RegisteredHeadersHash.gperf"
+ {"Content-Length", Http::HdrType::CONTENT_LENGTH, Http::HdrFieldType::ftInt64, HdrKind::EntityHeader},
+#line 108 "RegisteredHeadersHash.gperf"
{"Front-End-Https", Http::HdrType::FRONT_END_HTTPS, Http::HdrFieldType::ftStr, HdrKind::None},
-#line 85 "RegisteredHeadersHash.gperf"
- {"Set-Cookie2", Http::HdrType::SET_COOKIE2, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader},
+#line 114 "RegisteredHeadersHash.gperf"
+ {"Other:", Http::HdrType::OTHER, Http::HdrFieldType::ftStr, HdrKind::EntityHeader},
#line 68 "RegisteredHeadersHash.gperf"
{"Max-Forwards", Http::HdrType::MAX_FORWARDS, Http::HdrFieldType::ftInt64, HdrKind::RequestHeader},
-#line 114 "RegisteredHeadersHash.gperf"
- {"*INVALID*:", Http::HdrType::BAD_HDR, Http::HdrFieldType::ftInvalid, HdrKind::None},
-#line 74 "RegisteredHeadersHash.gperf"
- {"Proxy-Authentication-Info", Http::HdrType::PROXY_AUTHENTICATION_INFO, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
-#line 69 "RegisteredHeadersHash.gperf"
- {"Mime-Version", Http::HdrType::MIME_VERSION, Http::HdrFieldType::ftStr, HdrKind::GeneralHeader},
- {""},
-#line 41 "RegisteredHeadersHash.gperf"
- {"Content-Encoding", Http::HdrType::CONTENT_ENCODING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
-#line 57 "RegisteredHeadersHash.gperf"
- {"HTTP2-Settings", Http::HdrType::HTTP2_SETTINGS, Http::HdrFieldType::ftStr, HdrKind::RequestHeader|HdrKind::HopByHopHeader},
- {""},
-#line 64 "RegisteredHeadersHash.gperf"
- {"Key", Http::HdrType::KEY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 103 "RegisteredHeadersHash.gperf"
+ {"X-Accelerator-Vary", Http::HdrType::HDR_X_ACCELERATOR_VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 94 "RegisteredHeadersHash.gperf"
+ {"Vary", Http::HdrType::VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 97 "RegisteredHeadersHash.gperf"
+ {"WWW-Authenticate", Http::HdrType::WWW_AUTHENTICATE, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 112 "RegisteredHeadersHash.gperf"
+ {"FTP-Status", Http::HdrType::FTP_STATUS, Http::HdrFieldType::ftInt, HdrKind::None},
#line 89 "RegisteredHeadersHash.gperf"
{"Transfer-Encoding", Http::HdrType::TRANSFER_ENCODING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::GeneralHeader|HdrKind::HopByHopHeader},
- {""},
-#line 96 "RegisteredHeadersHash.gperf"
- {"Warning", Http::HdrType::WARNING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+#line 106 "RegisteredHeadersHash.gperf"
+ {"Surrogate-Capability", Http::HdrType::SURROGATE_CAPABILITY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader},
#line 63 "RegisteredHeadersHash.gperf"
{"Keep-Alive", Http::HdrType::KEEP_ALIVE, Http::HdrFieldType::ftStr, HdrKind::HopByHopHeader},
- {""}, {""}, {""}, {""},
-#line 112 "RegisteredHeadersHash.gperf"
- {"FTP-Reason", Http::HdrType::FTP_REASON, Http::HdrFieldType::ftStr, HdrKind::None},
- {""},
-#line 109 "RegisteredHeadersHash.gperf"
- {"FTP-Arguments", Http::HdrType::FTP_ARGUMENTS, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 32 "RegisteredHeadersHash.gperf"
+ {"Allow", Http::HdrType::ALLOW, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::EntityHeader},
{""}, {""},
-#line 60 "RegisteredHeadersHash.gperf"
- {"If-None-Match", Http::HdrType::IF_NONE_MATCH, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
+#line 64 "RegisteredHeadersHash.gperf"
+ {"Key", Http::HdrType::KEY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+ {""},
#line 91 "RegisteredHeadersHash.gperf"
{"Unless-Modified-Since", Http::HdrType::UNLESS_MODIFIED_SINCE, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 34 "RegisteredHeadersHash.gperf"
+ {"Authentication-Info", Http::HdrType::AUTHENTICATION_INFO, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
{""},
-#line 111 "RegisteredHeadersHash.gperf"
- {"FTP-Status", Http::HdrType::FTP_STATUS, Http::HdrFieldType::ftInt, HdrKind::None},
-#line 28 "RegisteredHeadersHash.gperf"
- {"Accept-Encoding", Http::HdrType::ACCEPT_ENCODING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader|HdrKind::ReplyHeader}
-};
+#line 74 "RegisteredHeadersHash.gperf"
+ {"Proxy-Authentication-Info", Http::HdrType::PROXY_AUTHENTICATION_INFO, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
+ {""},
+#line 60 "RegisteredHeadersHash.gperf"
+ {"If-None-Match", Http::HdrType::IF_NONE_MATCH, Http::HdrFieldType::ftStr, HdrKind::ListHeader},
+#line 110 "RegisteredHeadersHash.gperf"
+ {"FTP-Arguments", Http::HdrType::FTP_ARGUMENTS, Http::HdrFieldType::ftStr, HdrKind::None},
+#line 96 "RegisteredHeadersHash.gperf"
+ {"Warning", Http::HdrType::WARNING, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader},
+ {""}, {""}, {""}, {""},
+#line 113 "RegisteredHeadersHash.gperf"
+ {"FTP-Reason", Http::HdrType::FTP_REASON, Http::HdrFieldType::ftStr, HdrKind::None},
+ {""}, {""}, {""}, {""}, {""}, {""}, {""}, {""}, {""},
+#line 85 "RegisteredHeadersHash.gperf"
+ {"Set-Cookie2", Http::HdrType::SET_COOKIE2, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader}
+ };
const class HeaderTableRecord *
- HttpHeaderHashTable::lookup (const char *str, size_t len)
+HttpHeaderHashTable::lookup (register const char *str, register unsigned int len)
{
- if (len <= MAX_WORD_LENGTH && len >= MIN_WORD_LENGTH)
+ if (len <= MAX_WORD_LENGTH && len >= MIN_WORD_LENGTH)
{
- unsigned int key = HttpHeaderHash (str, len);
+ unsigned int key = HttpHeaderHash (str, len);
- if (key <= MAX_HASH_VALUE)
- if (len == lengthtable[key])
- {
- const char *s = HttpHeaderDefinitionsTable[key].name;
-
- if ((((unsigned char)*str ^ (unsigned char)*s) & ~32) == 0 && !gperf_case_memcmp (str, s, len))
- return &HttpHeaderDefinitionsTable[key];
- }
+ if (key <= MAX_HASH_VALUE)
+ if (len == lengthtable[key])
+ {
+ register const char *s = HttpHeaderDefinitionsTable[key].name;
+
+ if ((((unsigned char)*str ^ (unsigned char)*s) & ~32) == 0 && !gperf_case_memcmp (str, s, len))
+ return &HttpHeaderDefinitionsTable[key];
+ }
}
- return 0;
+ return 0;
}
-#line 115 "RegisteredHeadersHash.gperf"
+#line 116 "RegisteredHeadersHash.gperf"
diff -Naur a/src/http/RegisteredHeadersHash.gperf b/src/http/RegisteredHeadersHash.gperf
--- a/src/http/RegisteredHeadersHash.gperf 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/RegisteredHeadersHash.gperf 2022-08-11 11:00:16.000000000 -0700
@@ -102,6 +102,7 @@
X-Squid-Error, Http::HdrType::X_SQUID_ERROR, Http::HdrFieldType::ftStr, HdrKind::ReplyHeader
X-Accelerator-Vary, Http::HdrType::HDR_X_ACCELERATOR_VARY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader
X-Next-Services, Http::HdrType::X_NEXT_SERVICES, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::ReplyHeader
+x-ms-range, Http::HdrType::X_MS_RANGE, Http::HdrFieldType::ftPRange, HdrKind::None
Surrogate-Capability, Http::HdrType::SURROGATE_CAPABILITY, Http::HdrFieldType::ftStr, HdrKind::ListHeader|HdrKind::RequestHeader
Surrogate-Control, Http::HdrType::SURROGATE_CONTROL, Http::HdrFieldType::ftPSc, HdrKind::ListHeader|HdrKind::ReplyHeader
Front-End-Https, Http::HdrType::FRONT_END_HTTPS, Http::HdrFieldType::ftStr, HdrKind::None
diff -Naur a/src/http/Stream.cc b/src/http/Stream.cc
--- a/src/http/Stream.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/Stream.cc 2022-08-12 16:06:20.000000000 -0700
@@ -82,7 +82,9 @@
switch (socketState()) {
case STREAM_NONE:
- pullData();
+ if (!needsStallUntilEnd()) {
+ pullData();
+ }
break;
case STREAM_COMPLETE: {
@@ -128,6 +130,32 @@
}
bool
+Http::Stream::needsStallUntilEnd()
+{
+ const StoreEntry *entry = http->storeEntry();
+ /* ignore if we don't have a range or reply or content length or entry */
+ if (!http->request->range || !reply || !reply->content_length || !entry) {
+ return false;
+ }
+
+ int64_t roffLimit = http->request->getRangeOffsetLimit();
+ debugs(33, 5, reply << " has range limit " << roffLimit);
+
+ if (reply->content_length + reply->hdr_sz == entry->objectLen() ||
+ http->request->range->offsetLimitExceeded(roffLimit)) {
+ debugs(33, 5, reply << " unstalled from sending response");
+ return false;
+ }
+
+ StoreIOBuffer readBuffer;
+ readBuffer.offset = reply->content_length;
+ debugs(33, 5, reply << " stalling until we received all data");
+ clientStreamRead(getTail(), http, readBuffer);
+
+ return true;
+}
+
+bool
Http::Stream::multipartRangeRequest() const
{
return http->multipartRangeRequest();
diff -Naur a/src/http/Stream.h b/src/http/Stream.h
--- a/src/http/Stream.h 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http/Stream.h 2022-08-12 12:34:56.000000000 -0700
@@ -90,6 +90,9 @@
/// get more data to send
void pullData();
+ /// handles when client needs a partial response and we cache the whole thing
+ bool needsStallUntilEnd();
+
/// \return true if the HTTP request is for multiple ranges
bool multipartRangeRequest() const;
diff -Naur a/src/http.cc b/src/http.cc
--- a/src/http.cc 2022-06-05 15:11:52.000000000 -0700
+++ b/src/http.cc 2022-08-11 10:58:53.000000000 -0700
@@ -2251,6 +2251,8 @@
case Http::HdrType::IF_RANGE:
case Http::HdrType::REQUEST_RANGE:
+
+ case Http::HdrType::X_MS_RANGE:
/** \par Range:, If-Range:, Request-Range:
* Only pass if we accept ranges */
if (!we_do_ranges)
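
Alongside the patch, the proxy itself has to be configured to fetch and retain full objects. A minimal squid.conf sketch of the relevant directives is below — the cache path and size limits are assumptions, not values taken from this gist:

```
# Force Squid to fetch objects from the beginning when a range is requested
range_offset_limit none

# Never abort a partially fetched object when the client disconnects
quick_abort_min -1 KB

# Allow caching of large build artifacts (size is an assumed example)
maximum_object_size 4 GB

# On-disk cache: 64 GB at an assumed path
cache_dir ufs /var/spool/squid 65536 16 256
```

`range_offset_limit none` is the directive the patch changes the behaviour of; `quick_abort_min -1 KB` complements it by telling Squid to keep the server connection open even after the client side closes.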
2Fast2BCn commented Nov 17, 2023

Is there a docker image somewhere that would make it very easy to be used?