I previously wrote about TorchServe as a nice way to serve models.
There is a plethora of ways to optimize LLMs for inference, such as quantization, PagedAttention, kernel fusion, and other compilation techniques, and the list keeps expanding as demand for serving OSS LLMs grows. There is a groundswell of that demand because many organizations have tried OpenAI/Anthropic/PaLM, but want a solution they can control.
This is why TorchServe is nice: you can use any framework you want in the custom handler, such as GPTQ, transformers, or vLLM (as long as there is a Python client). With TorchServe, there is enough flexibility to plug in whichever of these backends fits your use case.
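To make that concrete, here is a minimal sketch of a custom handler that wraps a Hugging Face transformers model for generation. The class name, the `prompt` field in the request body, and the generation parameters are illustrative assumptions, not anything prescribed by TorchServe:

```python
import json

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from ts.torch_handler.base_handler import BaseHandler


class LlmHandler(BaseHandler):
    """Illustrative TorchServe handler backed by Hugging Face transformers."""

    def initialize(self, context):
        # TorchServe unpacks the model archive into model_dir
        model_dir = context.system_properties.get("model_dir")
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.tokenizer = AutoTokenizer.from_pretrained(model_dir)
        # causal LMs often ship without a pad token; reuse EOS for batching
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token
        self.model = AutoModelForCausalLM.from_pretrained(model_dir).to(self.device)
        self.model.eval()
        self.initialized = True

    def preprocess(self, requests):
        # each request carries its payload under "data" or "body",
        # possibly as raw bytes; we assume a JSON {"prompt": ...} shape
        prompts = []
        for req in requests:
            data = req.get("data") or req.get("body")
            if isinstance(data, (bytes, bytearray)):
                data = json.loads(data)
            prompts.append(data.get("prompt", ""))
        return self.tokenizer(prompts, return_tensors="pt", padding=True).to(self.device)

    def inference(self, inputs):
        with torch.no_grad():
            return self.model.generate(**inputs, max_new_tokens=128)

    def postprocess(self, outputs):
        # one decoded string per request in the batch
        return self.tokenizer.batch_decode(outputs, skip_special_tokens=True)
```

Because the backend lives entirely inside `initialize`/`inference`, swapping transformers for vLLM or a GPTQ-quantized model is a change to the handler, not to the serving layer; you still package it with `torch-model-archiver` and serve it with `torchserve` as usual.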