question generation

docker

alias=`whoami | cut -d'.' -f2`
sudo docker run -it --rm --runtime=nvidia --ipc=host --privileged \
  -v /home/${alias}:/home/${alias} -v /mnt/data:/mnt/data \
  pytorch/pytorch:1.1.0-cuda10.0-cudnn7.5-devel bash
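
Once inside the container, an optional sanity check (not part of the original recipe) confirms that the GPUs and the bundled PyTorch build are visible before going any further:

nvidia-smi
python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.cuda.device_count())"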

docker setup

apt-get update
apt-get install -y vim wget ssh

PWD_DIR=$(pwd)
cd $(mktemp -d)
git clone -q https://github.com/NVIDIA/apex.git
cd apex
pip install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
cd $PWD_DIR
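
If the extension build succeeded, apex's mixed-precision module should import cleanly; this optional check is worth running because the --fp16 flags used for training below rely on it:

python -c "from apex import amp; print('apex amp ok')"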

pip install --user tensorboardX six numpy tqdm path.py methodtools py-rouge pyrouge nltk
python -c "import nltk; nltk.download('punkt')"
pip install -e git://github.com/Maluuba/nlg-eval.git#egg=nlg-eval

cd ~/code
git clone https://github.com/donglixp/transformers

cd ~/code/transformers/
pip install --editable .
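
As an optional check that the editable install is the one being picked up, import the package and print its location; the top-level module name transformers is assumed from the repository name, so adjust it if the fork exposes a different package:

python -c "import transformers; print(transformers.__file__)"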

training

Data can be downloaded from here.

export PYTORCH_PRETRAINED_BERT_CACHE=/mnt/data/bert-cased-pretrained-cache/
export QG_DIR=/mnt/unilm/hanbao/exp/lbert/qg
python -m torch.distributed.launch --nproc_per_node=8 examples/run_seq2seq.py \
--do_train --fp16 --fp16_opt_level O2 --num_workers 0 --model_type unilm --model_name_or_path unilm-large-cased \
--tokenized_input --data_dir $QG_DIR/train --src_file train.pa.nqg.txt --tgt_file train.q.nqg.txt \
--output_dir $QG_DIR/models/output_e10m7_qw_99_l2_ftest_tfm \
--max_seq_length 512 --max_position_embeddings 512 --mask_prob 0.7 \
--max_pred 48 --train_batch_size 32 --gradient_accumulation_steps 2 \
--learning_rate 0.00002 --warmup_proportion 0.1 --num_train_epochs 10 --label_smoothing 0.1
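
Training for 10 epochs should leave per-epoch checkpoints in the output directory. The inference step below loads model.10.bin, so it is worth confirming that file exists first; the model.<epoch>.bin naming is inferred from that command rather than documented here:

ls -lh $QG_DIR/models/output_e10m7_qw_99_l2_ftest_tfm/model.*.bin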

inference

export PYTORCH_PRETRAINED_BERT_CACHE=/mnt/data/bert-cased-pretrained-cache/
export QG_DIR=/mnt/unilm/hanbao/exp/lbert/qg
export QG_MODEL_DIR=/mnt/unilm/hanbao/exp/lbert/qg/models/output_e10m7_qw_99_l2_ftest_tfm
python examples/decode_seq2seq.py --model_type unilm --model_name_or_path unilm-large-cased \
--input_file $QG_DIR/test/test.pa.nqg.txt --split test --tokenized_input \
--model_recover_path $QG_MODEL_DIR/model.10.bin --max_seq_length 512 --max_tgt_length 48 \
--batch_size 16 --beam_size 1 --length_penalty 0
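
To score the generated questions, the nlg-eval package installed earlier provides a command-line entry point. The sketch below makes two assumptions not stated in this gist: the hypothesis path depends on where decode_seq2seq.py writes its output (replace the placeholder accordingly), and test.q.nqg.txt as the reference file simply mirrors the naming of the training targets:

nlg-eval --hypothesis=<path-to-decoded-questions> \
  --references=$QG_DIR/test/test.q.nqg.txt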