[info] Application powered by Catalyst 5.90130
main: seed = 1683661395
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
llama_model_load: ggml ctx size = 6065.34 MB
llama_model_load: memory_size = 2048.00 MB, n_mem = 65536
llama_model_load: loading model part 1/1 from 'ggml-alpaca-7b-q4.bin'
llama_model_load: .................................... done
llama_model_load: model size = 4017.27 MB / num tensors = 291
system_info: n_threads = 4 / 16 | AVX = 1 | AVX2 = 1 | AVX512 = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
main: interactive mode on.
sampling parameters: temp = 0.100000, top_k = 40, top_p = 0.950000, repeat_last_n = 64, repeat_penalty = 1.300000
== Running in chat mode. ==
- Press Ctrl+C to interject at any time.
- Press Return to return control to LLaMA.
- If you want to submit another line, end your input in '\'.
In on_read
The process wrote a line: ''
why not?
In on_read
[... "In on_read" repeated 30 more times while the reply streamed in ...]
In on_read
The process wrote a line: '> Because it's impossible to know what will happen in the future, so why try and predict something when you can just let things unfold naturally instead?'
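The "In on_read" and "The process wrote a line: '...'" messages resemble the stdout on_read handler from the IO::Async::Process synopsis with an extra debug print, so the Catalyst application is presumably driving the alpaca.cpp chat binary through something like the sketch below. This is a minimal, hedged reconstruction: the binary name, model path, stdin handling and the on_finish behaviour are assumptions; only the two printed messages come from the log above.

use strict;
use warnings;

use IO::Async::Loop;
use IO::Async::Process;

my $loop = IO::Async::Loop->new;

# Assumed invocation; the log only shows the model file name, not the command line.
my $process = IO::Async::Process->new(
    command => [ './chat', '-m', 'ggml-alpaca-7b-q4.bin' ],
    stdin   => { via => 'pipe_write' },
    stdout  => {
        on_read => sub {
            my ( $stream, $buffref ) = @_;

            # Fires for every chunk the child writes, even a single token,
            # which is why the log shows a long run of "In on_read" lines.
            print "In on_read\n";

            # A line is only reported once a trailing "\n" has been buffered,
            # so the reply surfaces as one complete line at the very end.
            while ( $$buffref =~ s/^(.*)\n// ) {
                print "The process wrote a line: '$1'\n";
            }
            return 0;
        },
    },
    on_finish => sub { $loop->stop },
);

$loop->add( $process );

# Send the user's question to the model (assumed; the log only shows "why not?").
$process->stdin->write( "why not?\n" );

$loop->run;

Because the model streams its answer token by token without a newline, the on_read callback runs dozens of times before the line-matching regex finally sees a "\n" and reports the whole reply at once; the interactive "> " prompt is not followed by a newline of its own, which is why it only appears as the prefix of that final reported line.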