The only difference between the commands is the metadata (the deployment name used for graphing):
./run-bench.sh --model meta-llama/Llama-3.2-3B-Instruct \
  --base_url http://llm-d-inference-gateway.llm-d.svc.cluster.local:80 \
  --dataset-name random \
  --input-len 1000 \
  --output-len 500 \