from langchain.embeddings import LlamaCppEmbeddings does not work.
@psychemedia - Any ideas on where to seek out performance gain opportunities?
The final run on #129 is a killer. I was expecting VectorstoreIndexCreator to help improve performance, but it still takes a long time.
@JeffreyShran For performance, I'd probably use a different way of semantically indexing items; the above demo was trying to keep things bounded by reusing the same model for everything.
ValueError: Requested tokens exceed context window of 512 using gpt4All with langchain llamacpp
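A minimal sketch of why this error fires: llama.cpp rejects a request when prompt tokens plus the generation budget exceed its context window (512 by default). The 4-characters-per-token ratio below is only a rough assumption for illustration; the real model uses a BPE tokenizer.

```python
CONTEXT_WINDOW = 512  # llama.cpp's default n_ctx

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token (an assumption)."""
    return max(1, len(text) // 4)

def check_fits(prompt: str, max_new_tokens: int = 256) -> bool:
    """Return True if the prompt plus the generation budget fits the window."""
    return estimate_tokens(prompt) + max_new_tokens <= CONTEXT_WINDOW

# A short question fits; a whole file pasted as one prompt does not.
```

This is why a small script loaded as a single chunk can still overflow: the whole file ends up in one prompt.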
Thanks @psychemedia, my longer-term goal is to load in a git repo by swapping the loader for the official git one, in the hope that I could ask questions about my private projects.
RE: the error reported by @MakkiNeutron. I tried to load a fairly small Python script as a text file as a preliminary test and hit this error too. Some digging showed that llama.cpp can be modified to increase the token limit. However, I feel like this is the wrong step, and instead we should try a) as you suggested in reply to my earlier question, a different index approach, and/or b) further splitting the text into smaller batches than the 500 in your code.
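For (b), the splitting can be done before indexing. A minimal sketch of a character-based splitter with overlap, independent of langchain (the chunk size and overlap here are just parameters to tune, not values from the code above):

```python
def split_text(text: str, chunk_size: int = 250, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks no longer than chunk_size.

    Overlap helps a retrieved chunk keep enough surrounding context
    to answer questions that span a chunk boundary.
    """
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    chunks = []
    start = 0
    step = chunk_size - overlap
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks
```

langchain's own text splitters (e.g. `RecursiveCharacterTextSplitter`) do the same job with smarter boundary handling, preferring to split on newlines and sentence breaks first.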
I'm keen but very much a noob in this space and so any insights that you might be able to share would be useful and appreciated, particularly on my expected use-case. Do you feel that I am taking the smartest route or should I use a different approach within langchain or some other tool?
@JeffreyShran Hmm, I just arrived here, but increasing the token amount that Llama can handle is still a blurry topic, since it was trained from the beginning with that amount; technically you would need to redo the whole training of Llama with a larger input size. In other words, it is an inherent property of the model that is immutable from the beginning.
Good news: the input length that Llama was trained on (and therefore the maximum possible) is 2048 tokens!
Here you can see that limit in the HF docs by looking at the max_position_embeddings parameter.
BTW, here is a similar thread if you want to take a sneak peek.
Nevertheless, there are ways to give Llama more "memory scope"; here are some conversational approaches, and the last section is the most interesting one for any purpose.
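One common conversational approach is a sliding window over the history: keep only as many recent turns as fit the budget. A minimal sketch (the 2048-token budget and the 4-characters-per-token estimate are assumptions for illustration, not model internals):

```python
class SlidingWindowMemory:
    """Keep only the most recent conversation turns that fit a token budget."""

    def __init__(self, max_tokens: int = 2048):
        self.max_tokens = max_tokens
        self.turns: list[str] = []

    @staticmethod
    def _estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)  # crude estimate, not a real tokenizer

    def add(self, turn: str) -> None:
        self.turns.append(turn)
        # Drop the oldest turns until the history fits the budget again.
        while sum(self._estimate_tokens(t) for t in self.turns) > self.max_tokens:
            self.turns.pop(0)

    def prompt(self) -> str:
        """Join the retained turns into a single prompt string."""
        return "\n".join(self.turns)
```

The trade-off is that the model simply forgets everything that scrolls out of the window; summarisation-based memory keeps a compressed trace instead.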
Hope you found it helpful ✌🏼
Thanks, that is helpful. However it appears that these settings are already maxed out at default to 2048.
The file I tested with had only a few lines in it, so I think the problem might lie elsewhere.
Yes, indeed. I was hoping to find that limit for GPT4All but only found that the standard model used 1024 input tokens. So maybe the quantized LoRA version uses a limit of 512 tokens for some reason, although that doesn't make much sense, since quantized and LoRA versions only lose precision rather than dimensionality.
Anyway, I think the best way forward here is to try other models that we know can already handle 2048-token input. I suggest Vicuna, which was built mainly with this purpose of maxing out input/output.
If somebody can test this it would be so great.
I'm actually using ggml-vicuna-7b-4bit.bin. This is the one I'm having the most trouble with. :)
This would be much easier to follow with the working code in one place instead of only scattered fragments.
Is it possible to use GPT4All as the LLM with sql_agent or pandas_agent instead of OpenAI?
I've installed all the packages and still get this: zsh: command not found: pyllamacpp-convert-gpt4all
Try an older version of pyllamacpp: pip install pyllamacpp==1.0.7
I'm having trouble with the following code. I install pyllama with the following command successfully. However, when I run it, I receive an error message. Has anyone else encountered the same issue?
Update 1: When I clone the pyllama repository and run from pyllama, I can download the llama folder.