ChaiNNer with remote backend – Fly.io-powered

This document explains how to set up a Fly.io instance for personal use that runs ChaiNNer. Fly.io has several advantages, namely that

  • it is pay-for-what-you-use, and thus very cheap – indeed, if you spend less than $5 in a month, entirely free – and
  • if your account has GPUs enabled (which you have to email support for), you get access to very powerful GPUs such as the L40S, which we will be using here.

Credit to the document ChaiNNer with remote backend for providing the basic instructions.

Requirements:

  • A Fly.io account with GPUs enabled.
  • A Unix system with socat, Unison and NPM installed. Unix is only needed for the shell scripting, so this could probably be made to work on other OSes without much effort – perhaps by translating the scripts to Python. Note that some repositories – such as Ubuntu’s – ship a gutted build of Unison that lacks features these instructions require (in particular, the unison-fsmonitor binary), so installing it from upstream is recommended (see the sketch after this list).
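
If you do need to install Unison from upstream, here is a minimal sketch of a local install. It mirrors what the Dockerfile below does; the version and the ubuntu-x86_64 architecture are assumptions – adjust them for your machine.

# Download the static Unison release and install it system-wide.
UNISON=2.53.7
curl -Lo /tmp/unison.tar.gz "https://github.com/bcpierce00/unison/releases/download/v$UNISON/unison-$UNISON-ubuntu-x86_64-static.tar.gz"
tar -xzf /tmp/unison.tar.gz -C /tmp
# The tarball contains a bin/ directory with unison and unison-fsmonitor.
sudo cp /tmp/bin/* /usr/local/bin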

Setting up the server's files

Make a new directory for the server. Make sure it has a stable path.

Start by making a fly.toml config file for our app:

app = '{YOUR USERNAME}-chainner'
primary_region = 'ord'

[[services]]
internal_port = 1234
protocol = "tcp"
auto_stop_machines = "stop"
auto_start_machines = true
min_machines_running = 0

[[services.ports]]
handlers = []
port = 1234

[[services]]
internal_port = 8000
protocol = "tcp"
auto_stop_machines = "stop"
auto_start_machines = true
min_machines_running = 0

[[services.ports]]
handlers = []
port = 8000

[[vm]]
size = 'l40s'

Replace {YOUR USERNAME} with your username. In fact, you can call the app anything you like, but be aware that app names are global.

We choose ord as the primary region, as it is the only region with l40s support. We register two services on ports 1234 and 8000 for Unison and ChaiNNer respectively.

Technically, the entire services section is unnecessary; it is only needed if you want to connect over Flycast instead of directly. The main advantage of using Flycast is that Fly.io will automatically shut down your machines when not in use, ensuring that you won’t accidentally rack up a huge bill.

If there is a set of models that you expect to use often, it makes sense to put them directly in the app’s image. So make a subdirectory – here we call it Models – and put whatever model files you like in there.
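
For example (the model filename here is purely hypothetical – use whichever .pth models you actually want):

mkdir Models
cp ~/Downloads/4x-UltraSharp.pth Models/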

Next is the all-important Dockerfile.

FROM debian:bookworm-slim
ENV DEBIAN_FRONTEND=noninteractive
RUN rm -f /etc/apt/apt.conf.d/docker-clean; \
	echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' > /etc/apt/apt.conf.d/keep-cache
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
	--mount=type=cache,target=/var/lib/apt,sharing=locked \
	apt-get update && \
	apt-get install -y --no-install-recommends git curl ca-certificates libglib2.0-0
RUN git clone https://github.com/chaiNNer-org/chaiNNer && \
	cd chaiNNer && \
	git checkout 4f056759bda3ef1bb0c66c88d033970667cf6947

# Manually install Unison (to sync folders) from sources cause Unison in
# the APT repository does not contain 'unison-fsmonitor'.
ARG UNISON=2.53.7
RUN mkdir /tmp/unison && cd /tmp/unison \
	&& curl -Lo unison.tar.gz https://github.com/bcpierce00/unison/releases/download/v$UNISON/unison-$UNISON-ubuntu-x86_64-static.tar.gz  \
	&& tar -xzvf unison.tar.gz \
	&& cp ./bin/* /usr/local/bin \
	&& rm -rf *

COPY --from=ghcr.io/astral-sh/uv:0.5.27 /uv /uvx /bin/
ENV UV_LINK_MODE=copy

COPY /chainner.diff /
RUN --mount=type=cache,target=/root/.cache/uv cd chaiNNer && \
	git apply /chainner.diff && \
	uv venv --python 3.11 && \
	uv pip install setuptools && \
	uv pip install -r requirements.txt && \
	uv run backend/src/run.py --install-builtin-packages --close-after-start

RUN mkdir -p /path/to/your/server/directory/Models
COPY Models /path/to/your/server/directory/Models

COPY /init.sh /
ENTRYPOINT ["/init.sh"]

The Dockerfile

  • installs some basic tools needed by the Dockerfile itself, as well as glib, which one of ChaiNNer’s Python dependencies requires;
  • clones the ChaiNNer repo itself and checks it out to the latest commit at the time of writing;
  • installs Unison and uv;

    uv isn’t strictly necessary, but it is much, much faster than pip, so I use it here.

  • applies the patch given in this Gist and installs all the basic packages needed on the server (PyTorch, NCNN, etc. – this step will take a while); and
  • copies over your common models into the Models directory.

Make sure to replace /path/to/your/server/directory with its actual path.
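
If you’d rather not edit the Dockerfile by hand, a one-liner along these lines should do it, run from the server directory (assuming GNU sed):

# Convenience sketch: substitute the placeholder with the server directory's path.
sed -i "s|/path/to/your/server/directory|$PWD|g" Dockerfile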

The patch applied to ChaiNNer does several things:

  • Pip is replaced with uv (as part of this, various other changes are made, such as the deletion of pyproject.toml and the renaming of spandrel_extra_arches to its “real” PyPI name, spandrel-extra-arches);
  • typing_extensions is updated to work around a bug;
  • opencv-python is replaced with opencv-python-headless, which avoids the need for several dependencies;
  • a step is hacked into the server to uninstall opencv-python (which gets implicitly installed, since ncnn-vulkan and facexlib depend on it) and fully replace it with opencv-python-headless;
  • most of the dependency-installation code is replaced with simpler versions, since we don’t get progress bars under Docker anyway – this is not strictly necessary, but makes things much easier to debug;
  • the code that auto-detects whether an NVIDIA graphics card is present is replaced with code that unconditionally uses CUDA; this is necessary because the Fly.io builder servers have no graphics cards, so the Dockerfile would otherwise erroneously install CPU-only PyTorch;
  • PyTorch is upgraded from 2.1.2 to 2.6.0, as 2.1.2 supports a maximum CUDA version of 12.1, while the L40S requires at least CUDA 12.2, and
  • the HTTP server is modified to listen using dual-stack IPv6 instead of only IPv4, which enables connecting to the server directly if you want (and not just over Flycast).

The init.sh file is the entrypoint of the image, and is rather minimal:

#!/bin/sh

# Serve Unison file synchronization on TCP port 1234.
unison -socket 1234 &

# Start the ChaiNNer backend on port 8000.
cd chaiNNer
uv run backend/src/run.py 8000

We start the Unison server listening on port 1234, and then start the ChaiNNer backend itself on port 8000.

Launching the server

After installing flyctl and logging in to your Fly.io account, use fly launch --flycast --copy-config to create the app. --flycast configures the app for access over Flycast; including it is equivalent to running fly ips allocate-v6 --private after the fact. --copy-config is needed because the fly.toml has already been written. You may then have to run fly deploy to actually get the app up and running.
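
Putting that together, the launch sequence is just:

fly launch --flycast --copy-config
fly deploy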

The command may decide to create two Machines for you, in which case you can avoid paying for both by destroying one of them – fly machine list will show their IDs, and fly machine destroy {ID} will get rid of one.

You’ll want to connect to your Fly.io private network (the WireGuard VPN) now, so you can actually reach the apps. They’re not exposed to the public by default (and for good reason).
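
For example, you can create a WireGuard peer with flyctl and bring it up with wg-quick. The personal org slug and my-laptop peer name below are assumptions – substitute your own.

# Create a WireGuard peer in ord and write its config to fly.conf.
fly wireguard create personal ord my-laptop fly.conf
# Bring the tunnel up (or import fly.conf into your WireGuard client).
sudo wg-quick up ./fly.conf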

The client app

Find some directory and clone the ChaiNNer repository in there. In this example, we assume it is present at ~/.local/src/chaiNNer. Put the following shell script somewhere in your $PATH, for example at ~/.local/bin/chainner-client.

#!/bin/sh
set -eu

app_name={YOUR USERNAME}-chainner
machine_id={YOUR MACHINE ID}
remote_host={YOUR FLYCAST IPV6 ADDRESS}

trap "exit" INT
all_traps=
add_trap() {
	all_traps="$1;$all_traps"
	trap "$all_traps" EXIT
}

# set to `false` for debugging (see Debugging section)
if true; then
	host="[$remote_host]"

	socat TCP-LISTEN:8000,fork,reuseaddr "TCP:$host:8000" &
	proxy=$!
	add_trap "kill $proxy"

	fly machines -a $app_name start $machine_id
	add_trap "fly machines -a $app_name stop $machine_id"

	fly ssh console -a $app_name -C "sh -c \"rm -rf '$PWD' && mkdir -p '$PWD'\""
else
	host='127.0.0.1'
	docker exec chainner sh -c "rm -rf '$PWD' && mkdir -p '$PWD'"
fi

unison . socket://$host:1234$PWD -batch -auto -repeat watch -ignore 'Name *.kra' -ignorearchives &
unison=$!
add_trap "kill -s INT $unison"

(cd ~/.local/src/chaiNNer && npm run frontend || true)

Set app_name to the same app name as above. Set machine_id to the ID of your machine, which you can obtain by running fly machine list in the server directory (or, more directly, via fly machine list -a $app_name | grep l40s | awk '{ print $1 }'). Set remote_host to the Flycast IP address, which you can obtain by running fly ips list in the server directory (or via fly ips list -a $app_name | grep v6 | awk '{ print $2 }').
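
If you’d rather not hard-code the last two values, a sketch that derives them at startup using those same pipelines (at the cost of a little extra latency on every run):

app_name={YOUR USERNAME}-chainner
machine_id=$(fly machine list -a "$app_name" | grep l40s | awk '{ print $1 }')
remote_host=$(fly ips list -a "$app_name" | grep v6 | awk '{ print $2 }')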

Then enter any directory containing the images you want to process and run the script. After some waiting while things get set up, you should be able to use ChaiNNer as normal, with one caveat: you cannot read or write images outside the directory in which you ran the script – so make sure you stick to that directory. You also have read-only access to the models in the Models directory.

The script uses Socat to establish a TCP proxy between your local port 8000, which ChaiNNer expects to connect to the server on, and the remote port 8000. ChaiNNer’s frontend does expose an (unstable) --remote-host CLI option to which a URL like http://$host could be directly passed, but I could never get this to work – perhaps due to the use of IPv6.
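
To sanity-check the proxy while the script is running, you can query the backend through it. The /nodes route here is an assumption, based on the endpoint ChaiNNer’s frontend uses to fetch the node list:

# Should print a large JSON document describing all available nodes.
curl http://127.0.0.1:8000/nodes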

The script uses Unison to synchronize the contents of the directory you ran the script in with its newly-made equivalent on the remote machine. In this example, we pass in -ignore 'Name *.kra' to avoid transferring all Krita files to the server; adjust this option to your liking, and read through the Unison manual for information about the syntax. The -repeat watch option ensures that synchronization is continuous; thus output files from the remote will immediately appear on your machine. -ignorearchives ensures that Unison doesn’t assume that the server is persistent, since in our case its filesystem is ephemeral.
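
For reference, Unison supports several ignore forms; these are illustrative examples, not patterns this setup requires:

-ignore 'Name *.kra'      # ignore files matching a name pattern anywhere
-ignore 'Path cache'      # ignore a specific path relative to the sync root
-ignore 'Regex .*\.tmp'   # ignore paths matching a regular expression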

In theory, one should be able to set remote_host=$app_name.flycast, but I found that this does not work in practice; I’m not sure whether it’s a Fly.io bug. To bypass Flycast and use a direct connection, one can set remote_host to the IP address of the machine, which is shown in the fly machine list table (again, one can in theory use remote_host=$app_name.internal for this purpose, but I found that unreliable). Also in theory, one does not need to explicitly start the machine as I do here, since Flycast should handle that – but I start it anyway.
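
For scripting the direct-connection variant, something like this should work (the --json flag and the private_ip field name are assumptions based on flyctl’s JSON output):

# Direct (non-Flycast) IPv6 address of the first machine.
remote_host=$(fly machine list -a "$app_name" --json | jq -r '.[0].private_ip')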

Debugging

I recommend placing the following script in the server directory for running and debugging the setup locally (requires Docker):

#!/bin/sh

set -eu
cd "$(dirname "$0")"

# Regenerate chainner.diff from your local ChaiNNer checkout.
git --work-tree ~/.local/src/chaiNNer --git-dir ~/.local/src/chaiNNer/.git diff > chainner.diff
# Remove any container left over from a previous run.
docker container kill chainner || true
docker container rm chainner || true

trap "exit" INT
trap "docker container kill chainner && docker container rm chainner" EXIT

docker buildx build -t chainner . --progress=plain
docker run --name=chainner -p 127.0.0.1:1234:1234 -p 127.0.0.1:8000:8000 chainner

Edit the client script above to use if false instead of if true to set it up for Docker.

Possible improvements

You might notice that starting the server is quite slow; if you check the logs, it gets stuck at the [Worker] Loading Nodes... phase. I don’t know the internals of ChaiNNer, so I don’t really know what’s happening there.

The server also complains that ONNX Runtime (GPU) is missing; I’m not sure what’s up with that. An error, [Worker] vkCreateInstance failed -9, also appears in the logs, so maybe the two are related.

chainner.diff

diff --git a/backend/src/dependencies/install_server_deps.py b/backend/src/dependencies/install_server_deps.py
index e33026f9..4c11c467 100644
--- a/backend/src/dependencies/install_server_deps.py
+++ b/backend/src/dependencies/install_server_deps.py
@@ -7,22 +7,12 @@ from .store import (
DependencyInfo,
install_dependencies_sync,
installed_packages,
- python_path,
)
# Get the list of installed packages
# We can't rely on using the package's __version__ attribute because not all packages actually have it
try:
- pip_list = subprocess.check_output(
- [
- python_path,
- "-m",
- "pip",
- "list",
- "--format=json",
- "--disable-pip-version-check",
- ]
- )
+ pip_list = subprocess.check_output(["uv", "pip", "list", "--format=json"])
for p in json_parse(pip_list):
installed_packages[p["name"]] = p["version"]
except Exception as e:
@@ -70,8 +60,7 @@ deps: list[DependencyInfo] = [
# Other deps necessary for general use
DependencyInfo(
package_name="typing_extensions",
- version="4.6.2",
- from_file="typing_extensions-4.6.3-py3-none-any.whl",
+ version="4.11.0",
),
DependencyInfo(
package_name="pynvml",
diff --git a/backend/src/dependencies/store.py b/backend/src/dependencies/store.py
index fd163804..10f3fd81 100644
--- a/backend/src/dependencies/store.py
+++ b/backend/src/dependencies/store.py
@@ -11,7 +11,6 @@ from typing import Iterable
from custom_types import UpdateProgressFn
-python_path = sys.executable
dir_path = os.path.dirname(os.path.realpath(__file__))
installed_packages: dict[str, str] = {}
@@ -88,19 +87,17 @@ def install_dependencies_sync(
if dep_info.extra_index_url
}
- extra_index_args = []
+ extra_index_args = ["--extra-index-url", "https://pypi.org/simple"]
if len(extra_index_urls) > 0:
extra_index_args.extend(["--extra-index-url", ",".join(extra_index_urls)])
+ extra_index_args.extend(["--index-strategy", "unsafe-best-match"])
exit_code = subprocess.check_call(
[
- python_path,
- "-m",
+ "uv",
"pip",
"install",
*[pin(dep_info) for dep_info in dependencies_to_install],
- "--disable-pip-version-check",
- "--no-warn-script-location",
*extra_index_args,
],
env=ENV,
@@ -108,6 +105,12 @@ def install_dependencies_sync(
if exit_code != 0:
raise ValueError("An error occurred while installing dependencies.")
+ subprocess.check_call(["uv", "pip", "uninstall", "opencv-python"], env=ENV)
+ subprocess.check_call(
+ ["uv", "pip", "install", "--reinstall", "opencv-python-headless==4.8.0.76", "numpy==1.24.4"],
+ env=ENV,
+ )
+
for dep_info in dependencies_to_install:
installed_packages[dep_info.package_name] = dep_info.version
@@ -119,126 +122,7 @@ async def install_dependencies(
update_progress_cb: UpdateProgressFn | None = None,
logger: Logger | None = None,
):
- # If there's no progress callback, just install the dependencies synchronously
- if update_progress_cb is None:
- return install_dependencies_sync(dependencies)
-
- dependencies_to_install = filter_necessary_to_install(dependencies)
- if len(dependencies_to_install) == 0:
- return 0
-
- dependency_name_map = {
- dep_info.package_name: dep_info.display_name or dep_info.package_name
- for dep_info in dependencies_to_install
- }
- deps_count = len(dependencies_to_install)
- deps_counter = 0
- transitive_deps_counter = 0
-
- extra_index_urls = {
- dep_info.extra_index_url
- for dep_info in dependencies_to_install
- if dep_info.extra_index_url
- }
-
- extra_index_args = []
- if len(extra_index_urls) > 0:
- extra_index_args.extend(["--extra-index-url", ",".join(extra_index_urls)])
-
- def get_progress_amount():
- transitive_progress = 1 - 1 / (2**transitive_deps_counter)
- progress = (deps_counter + transitive_progress) / (deps_count + 1)
- return min(max(0, progress), 1) * DEP_MAX_PROGRESS
-
- # Used to increment by a small amount between collect and download
- dep_small_incr = (DEP_MAX_PROGRESS / deps_count) / 2
-
- process = subprocess.Popen(
- [
- python_path,
- "-m",
- # TODO: Change this back to "pip" once pip updates with my changes
- "chainner_pip",
- "install",
- *[pin(dep_info) for dep_info in dependencies_to_install],
- "--disable-chainner_pip-version-check",
- "--no-warn-script-location",
- "--progress-bar=json",
- "--no-cache-dir",
- *extra_index_args,
- ],
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- encoding="utf-8",
- env=ENV,
- )
- installing_name = "Unknown"
- while True:
- nextline = process.stdout.readline() # type: ignore
- if process.poll() is not None:
- break
- line = nextline.strip()
- if not line:
- continue
-
- if logger is not None and not line.startswith("Progress:"):
- logger.info(line)
-
- # The Collecting step of pip. It tells us what package is being installed.
- if "Collecting" in line:
- match = COLLECTING_REGEX.search(line)
- if match:
- package_name = match.group(1)
- installing_name = dependency_name_map.get(package_name, None)
- if installing_name is None:
- installing_name = package_name
- transitive_deps_counter += 1
- else:
- deps_counter += 1
- await update_progress_cb(
- f"Collecting {installing_name}...", get_progress_amount(), None
- )
- # The Downloading step of pip. It tells us what package is currently being downloaded.
- # Later, we can use this to get the progress of the download.
- # For now, we just tell the user that it's happening.
- elif "Downloading" in line:
- await update_progress_cb(
- f"Downloading {installing_name}...",
- get_progress_amount() + dep_small_incr,
- None,
- )
- # We can parse this line to get the progress of the download, but only in our pip fork for now
- elif "Progress:" in line:
- json_line = line.replace("Progress:", "").strip()
- try:
- parsed = json.loads(json_line)
- current, total = parsed["current"], parsed["total"]
- if total is not None and total > 0:
- percent = current / total
- await update_progress_cb(
- f"Downloading {installing_name}...",
- get_progress_amount() + dep_small_incr,
- percent,
- )
- except Exception as e:
- if logger is not None:
- logger.error(str(e))
- # pass
- # The Installing step of pip. Installs happen for all the collected packages at once.
- # We can't get the progress of the installation, so we just tell the user that it's happening.
- elif "Installing collected packages" in line:
- await update_progress_cb("Installing collected dependencies...", 0.9, None)
-
- exit_code = process.wait()
- if exit_code != 0:
- raise ValueError("An error occurred while installing dependencies.")
-
- await update_progress_cb("Finished installing dependencies...", 1, None)
-
- for dep_info in dependencies_to_install:
- installed_packages[dep_info.package_name] = dep_info.version
-
- return len(dependencies_to_install)
+ return install_dependencies_sync(dependencies)
def uninstall_dependencies_sync(
@@ -249,8 +133,7 @@ def uninstall_dependencies_sync(
exit_code = subprocess.check_call(
[
- python_path,
- "-m",
+ "uv",
"pip",
"uninstall",
*[d.package_name for d in dependencies],
@@ -262,7 +145,7 @@ def uninstall_dependencies_sync(
raise ValueError("An error occurred while uninstalling dependencies.")
for dep_info in dependencies:
- installed_packages[dep_info.package_name] = dep_info.version
+ del installed_packages[dep_info.package_name]
async def uninstall_dependencies(
@@ -270,93 +153,11 @@ async def uninstall_dependencies(
update_progress_cb: UpdateProgressFn | None = None,
logger: Logger | None = None,
):
- # If there's no progress callback, just uninstall the dependencies synchronously
- if update_progress_cb is None:
- return uninstall_dependencies_sync(dependencies)
-
- if len(dependencies) == 0:
- return
-
- dependency_name_map = {
- dep_info.package_name: dep_info.display_name or dep_info.package_name
- for dep_info in dependencies
- }
- deps_count = len(dependencies)
- deps_counter = 0
- transitive_deps_counter = 0
-
- def get_progress_amount():
- transitive_progress = 1 - 1 / (2**transitive_deps_counter)
- progress = (deps_counter + transitive_progress) / (deps_count + 1)
- return min(max(0, progress), 1)
-
- # Used to increment by a small amount between collect and download
- dep_small_incr = (1 / deps_count) / 2
-
- process = subprocess.Popen(
- [
- python_path,
- "-m",
- # TODO: Change this back to "pip" once pip updates with my changes
- "chainner_pip",
- "uninstall",
- *[d.package_name for d in dependencies],
- "-y",
- ],
- stdout=subprocess.PIPE,
- stderr=subprocess.STDOUT,
- encoding="utf-8",
- env=ENV,
- )
- uninstalling_name = "Unknown"
- while True:
- nextline = process.stdout.readline() # type: ignore
- if process.poll() is not None:
- break
- line = nextline.strip()
- if not line:
- continue
-
- if logger is not None and not line.startswith("Progress:"):
- logger.info(line)
-
- # The Uninstalling step of pip. It tells us what package is being UNinstalled.
- if "Uninstalling" in line:
- match = UNINSTALLING_REGEX.search(line)
- if match:
- package_name = match.group(1)
- uninstalling_name = dependency_name_map.get(package_name, None)
- if uninstalling_name is None:
- uninstalling_name = package_name
- transitive_deps_counter += 1
- else:
- deps_counter += 1
- await update_progress_cb(
- f"Uninstalling {uninstalling_name}...", get_progress_amount(), None
- )
- # The Downloading step of pip. It tells us what package is currently being downloaded.
- # Later, we can use this to get the progress of the download.
- # For now, we just tell the user that it's happening.
- elif "Successfully uninstalled" in line:
- await update_progress_cb(
- f"Uninstalled {uninstalling_name}.",
- get_progress_amount() + dep_small_incr,
- None,
- )
-
- exit_code = process.wait()
- if exit_code != 0:
- raise ValueError("An error occurred while installing dependencies.")
-
- await update_progress_cb("Finished installing dependencies...", 1, None)
-
- for dep_info in dependencies:
- del installed_packages[dep_info.package_name]
+ return uninstall_dependencies_sync(dependencies)
__all__ = [
"DependencyInfo",
- "python_path",
"install_dependencies",
"install_dependencies_sync",
"installed_packages",
diff --git a/backend/src/packages/chaiNNer_pytorch/__init__.py b/backend/src/packages/chaiNNer_pytorch/__init__.py
index f93775ac..a5688630 100644
--- a/backend/src/packages/chaiNNer_pytorch/__init__.py
+++ b/backend/src/packages/chaiNNer_pytorch/__init__.py
@@ -6,6 +6,8 @@ from api import GB, KB, MB, Dependency, add_package
from gpu import nvidia
from system import is_arm_mac
+nvidia_is_available = True
+
general = "PyTorch uses .pth models to upscale images."
if is_arm_mac:
@@ -29,14 +31,14 @@ def get_pytorch():
Dependency(
display_name="PyTorch",
pypi_name="torch",
- version="2.1.2",
+ version="2.6.0",
size_estimate=55.8 * MB,
auto_update=False,
),
Dependency(
display_name="TorchVision",
pypi_name="torchvision",
- version="0.16.2",
+ version="0.21.0",
size_estimate=1.3 * MB,
auto_update=False,
),
@@ -46,11 +48,11 @@ def get_pytorch():
Dependency(
display_name="PyTorch",
pypi_name="torch",
- version="2.1.2+cu121" if nvidia.is_available else "2.1.2",
- size_estimate=2 * GB if nvidia.is_available else 140 * MB,
+ version="2.6.0+cu126" if nvidia_is_available else "2.6.0",
+ size_estimate=2 * GB if nvidia_is_available else 140 * MB,
extra_index_url=(
- "https://download.pytorch.org/whl/cu121"
- if nvidia.is_available
+ "https://download.pytorch.org/whl/cu126"
+ if nvidia_is_available
else "https://download.pytorch.org/whl/cpu"
),
auto_update=False,
@@ -58,11 +60,11 @@ def get_pytorch():
Dependency(
display_name="TorchVision",
pypi_name="torchvision",
- version="0.16.2+cu121" if nvidia.is_available else "0.16.2",
- size_estimate=2 * MB if nvidia.is_available else 800 * KB,
+ version="0.21.0+cu126" if nvidia_is_available else "0.21.0",
+ size_estimate=2 * MB if nvidia_is_available else 800 * KB,
extra_index_url=(
- "https://download.pytorch.org/whl/cu121"
- if nvidia.is_available
+ "https://download.pytorch.org/whl/cu126"
+ if nvidia_is_available
else "https://download.pytorch.org/whl/cpu"
),
auto_update=False,
@@ -103,7 +105,7 @@ package = add_package(
),
Dependency(
display_name="Spandrel extra architectures",
- pypi_name="spandrel_extra_arches",
+ pypi_name="spandrel-extra-arches",
version="0.2.0",
size_estimate=83 * KB,
),
diff --git a/backend/src/packages/chaiNNer_standard/__init__.py b/backend/src/packages/chaiNNer_standard/__init__.py
index bd9b0bc9..5eac5f09 100644
--- a/backend/src/packages/chaiNNer_standard/__init__.py
+++ b/backend/src/packages/chaiNNer_standard/__init__.py
@@ -16,7 +16,7 @@ package = add_package(
),
Dependency(
display_name="OpenCV",
- pypi_name="opencv-python",
+ pypi_name="opencv-python-headless",
version="4.8.0.76",
size_estimate=30 * MB,
import_name="cv2",
diff --git a/backend/src/server.py b/backend/src/server.py
index 6c5148e1..4786cea6 100644
--- a/backend/src/server.py
+++ b/backend/src/server.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import socket
import asyncio
import gc
import importlib
@@ -639,7 +640,8 @@ async def after_server_start(sanic_app: Sanic, loop: asyncio.AbstractEventLoop):
def main():
config = AppContext.get(app).config
- app.run(port=config.port, single_process=True)
+ sock = socket.create_server(("::", config.port), family=socket.AF_INET6, reuse_port=True, dualstack_ipv6=True)
+ app.run(sock=sock, single_process=True)
if exit_code != 0:
sys.exit(exit_code)
diff --git a/backend/src/server_host.py b/backend/src/server_host.py
index 4d9253b0..b4ce6597 100644
--- a/backend/src/server_host.py
+++ b/backend/src/server_host.py
@@ -1,5 +1,6 @@
from __future__ import annotations
+import socket
import asyncio
import logging
import sys
@@ -429,6 +430,7 @@ async def import_packages(
logger.error(f"Error installing dependencies: {ex}", exc_info=True)
if config.close_after_start:
raise ValueError("Error installing dependencies") from ex
+ raise ex
logger.info("Done checking dependencies...")
@@ -527,7 +529,8 @@ async def after_server_start(sanic_app: Sanic, loop: asyncio.AbstractEventLoop):
def main():
config = AppContext.get(app).config
- app.run(port=config.port, single_process=True)
+ sock = socket.create_server(("::", config.port), family=socket.AF_INET6, reuse_port=True, dualstack_ipv6=True)
+ app.run(sock=sock, single_process=True)
if __name__ == "__main__":
diff --git a/pyproject.toml b/pyproject.toml
deleted file mode 100644
index f785f712..00000000
--- a/pyproject.toml
+++ /dev/null
@@ -1,86 +0,0 @@
-[project]
-# Support Python 3.8+.
-requires-python = ">=3.8"
-
-[tool.ruff]
-# Same as Black.
-line-length = 88
-indent-width = 4
-
-src = ["backend/src"]
-
-# ignore vendored code
-extend-exclude = ["**/pytorch/architecture/**"]
-
-unsafe-fixes = true
-
-[tool.ruff.lint]
-# Add the `line-too-long` rule to the enforced rule set.
-extend-select = [
- "UP", # pyupgrade
- "E", # pycodestyle
- "W", # pycodestyle
- "F", # pyflakes
- "I", # isort
- "N", # pep8-naming
- # "ANN", # flake8-annotations
- "ANN001",
- "ANN002",
- # "ASYNC", # flake8-async
- "PL", # pylint
- "RUF", # ruff
- "B", # flake8-bugbear
- # "A", # flake8-builtins
- # "COM", # flake8-commas
- "C4", # flake8-comprehensions
- "FA", # flake8-future-annotations
- "ISC", # flake8-implicit-str-concat
- "ICN", # flake8-import-conventions
- "G", # flake8-logging-format
- # "INP", # flake8-implicit-namespaces
- "PIE", # flake8-pie
- # "PYI", # flake8-pyi
- "Q", # flake8-quotes
- # "RET", # flake8-return
- "SLF", # flake8-self
- # "SIM", # flake8-simplify
- # "TCH", # flake8-tidy-imports
- "NPY", # NumPy-specific rules
-]
-ignore = [
- "E501", # Line too long
- "PLR2004", # Magic value
- "PLR0911", # Too many return statements
- "PLR0912", # Too many branches
- "PLR0913", # Too many arguments
- "PLR0915", # Too many statements,
- "E741", # Ambiguous variable name,
- "E712", # true-false-comparison, has false positives because of numpy's operator overloading
- "F821", # Undefined name -- this one is weird, it seems like it has false positives on closures and other context changes
- "F403", # 'from module import *' used; unable to detect undefined names
- "PLW0603", # Using the global statement
- "N999", # Invalid module name (which triggers for chaiNNer)
- "N818", # Exception name should end in Error
- "ISC001", # Implicit string concatenation, conflicts with formatter
-]
-
-[tool.ruff.format]
-# Like Black, use double quotes for strings.
-quote-style = "double"
-
-# Like Black, indent with spaces, rather than tabs.
-indent-style = "space"
-
-# Like Black, respect magic trailing commas.
-skip-magic-trailing-comma = false
-
-# Like Black, automatically detect the appropriate line ending.
-line-ending = "auto"
-
-
-[tool.ruff.lint.pep8-naming]
-ignore-names = ["*Input", "*Output", "*Dropdown"]
-
-[tool.pytest.ini_options]
-filterwarnings = ["ignore::DeprecationWarning", "ignore::UserWarning"]
-pythonpath = ["backend/src"]