A modified chat-13B.sh script from llama.cpp that changes the prompt to the style the Vicuna model was trained with.
#!/bin/bash

cd "$(dirname "$0")/.." || exit

MODEL="${MODEL:-./models/13B-vicuna/ggml-vicuna-13b-4bit-rev1.bin}"
USER_NAME="${USER_NAME:-Human}"
AI_NAME="${AI_NAME:-Assistant}"

# Adjust to the number of CPU cores you want to use.
N_THREAD="${N_THREAD:-8}"
# Number of tokens to predict (larger than the default because we want a long interaction).
N_PREDICTS="${N_PREDICTS:-2048}"

# Note: you can also override the generation options by specifying them on the command line:
# For example, override the context size by doing: ./chatLLaMa --ctx_size 1024
GEN_OPTIONS="${GEN_OPTIONS:---ctx_size 2048 --temp 0.7 --top_k 40 --top_p 0.5 --repeat_last_n 256 --batch_size 1024 --repeat_penalty 1.17647}"

# shellcheck disable=SC2086 # Intended splitting of GEN_OPTIONS
./main $GEN_OPTIONS \
    --model "$MODEL" \
    --threads "$N_THREAD" \
    --n_predict "$N_PREDICTS" \
    --color --interactive \
    --reverse-prompt "${USER_NAME}:" \
    --prompt "
A chat between a curious human and an artificial intelligence assistant.
The assistant gives helpful, detailed, and polite answers to the human's questions.
### $USER_NAME: Hello, $AI_NAME!
### $AI_NAME: Hello $USER_NAME! How may I help you today?
### $USER_NAME:" "$@"
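Because every setting uses the ${VAR:-default} expansion pattern and the trailing "$@" forwards extra arguments to ./main, both can be overridden at invocation time without editing the script. A minimal usage sketch, assuming the script is saved as chat-vicuna.sh (the script name and the alternative model path below are placeholders, not part of the gist):

# Use a different model file and more CPU threads (hypothetical path):
MODEL=./models/13B-vicuna/some-other-model.bin N_THREAD=12 ./chat-vicuna.sh

# Pass generation options straight through to ./main:
./chat-vicuna.sh --ctx_size 1024 --temp 0.5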