@ravi9
Created April 4, 2018 17:16
TensorFlow NMT inference with varying batch sizes
#!/bin/bash
# nmt_infer_batchsize.sh
# Measure TensorFlow NMT inference throughput (words per second, WPS)
# across several inference batch sizes.
#
# Prereq:
#   sudo apt install -y moreutils jq   # provides sponge and jq

echo -e "\nBZ WPS \n"
#set -x
# The leading 1024 appears to be a warm-up run before the measured sweep.
for bz in 1024 1 32 64 128 256 512 1024; do
  # Set infer_batch_size in the hparams JSON in place. sponge buffers jq's
  # output before writing, so the file is not truncated while jq reads it.
  jq ".infer_batch_size = $bz" /home/ubuntu/nmt_model/hparams | sponge /home/ubuntu/nmt_model/hparams

  # Run inference and pull the WPS figure from the
  # "done, num sentences ..." summary line in the log.
  wps=$(python -m nmt.nmt --out_dir=/home/ubuntu/nmt_model \
    --inference_input_file=/home/ubuntu/my_infer_file.vi \
    --inference_output_file=/home/ubuntu/nmt_model/output_infer \
    2>&1 | grep "done, num sentences" | cut -d "," -f4 | cut -d " " -f3)
  echo "$bz $wps"
done
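The `jq ... | sponge` idiom matters because a plain redirect back into the same file (`jq ... hparams > hparams`) truncates the file before jq reads it. If moreutils/jq are unavailable, a temp file plus `mv` gives the same in-place effect, and the JSON edit can be done with python3 (which the script already needs). A minimal sketch; the `hparams.json` filename and sample contents here are hypothetical stand-ins for the gist's `/home/ubuntu/nmt_model/hparams`:

```shell
#!/bin/bash
# Sketch: set infer_batch_size without jq/sponge (assumes python3 on PATH).
HPARAMS=hparams.json   # hypothetical path for the demo
BZ=128

# Create a tiny sample hparams file so the sketch is self-contained.
echo '{"infer_batch_size": 1, "num_units": 512}' > "$HPARAMS"

# Edit the JSON, writing to a temp file first so the source is never
# truncated mid-read, then atomically move it into place (the sponge role).
python3 - "$HPARAMS" "$BZ" <<'EOF' > "$HPARAMS.tmp"
import json, sys
with open(sys.argv[1]) as f:
    hp = json.load(f)
hp["infer_batch_size"] = int(sys.argv[2])
json.dump(hp, sys.stdout, indent=2)
EOF
mv "$HPARAMS.tmp" "$HPARAMS"
```

The `mv` at the end is what makes the update safe: the original file stays intact until the fully written replacement takes its place.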