This document explains how to hot-swap LoRA adapters via llama-cpp-python, the Python wrapper for llama.cpp.
This provides extra docs for my pull request: abetlen/llama-cpp-python#1817
At the time of writing, LoRA hot-swapping support has not been merged into the llama-cpp-python main branch, so this guide uses the code from the PR branch to try it out.
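To follow along before the PR is merged, one way to get the branch's code is to install directly from the pull request's head ref on GitHub. This is a sketch, assuming GitHub's `refs/pull/<N>/head` convention and that the usual llama-cpp-python build prerequisites (a C/C++ compiler and CMake) are already installed:

```shell
# Install llama-cpp-python from the PR #1817 branch (assumption: GitHub's
# refs/pull/<N>/head ref; this triggers a local build of llama.cpp).
pip install "git+https://github.com/abetlen/llama-cpp-python.git@refs/pull/1817/head"
```

Once the PR is merged, a plain `pip install llama-cpp-python` of a release containing it would work instead.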