Using LLMs as an external resource has several disadvantages:
- Privacy - the handling of confidential information by the models cannot be guaranteed, nor can the safety of intellectual property (IP)
- Flexibility - externally hosted models leave little room for specialization and fine-tuning, even though adapting more general models yields efficiency gains in terms of resources used and outputs achieved
- Stability - reliance on an external vendor weakens internal system stability, since availability cannot be guaranteed
An approach whereby LLMs are deployed, fine-tuned and utilized locally (on-premise) allows: