# Clone the repo
git clone https://github.com/imartinez/privateGPT
cd privateGPT

# Install Python 3.11
pyenv install 3.11
pyenv local 3.11

# Install dependencies
poetry install --with ui,local

# Download Embedding and LLM models
poetry run python scripts/setup

# (Optional) For Mac with Metal GPU, enable it. Check the Installation and Settings
# section to learn how to enable GPU on other platforms
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python

# Run the local server
PGPT_PROFILES=local make run

# Note: on Mac with Metal you should see a ggml_metal_add_buffer log, stating the GPU
# is being used

# Navigate to the UI and try it out!
http://localhost:8001/
To get the latest version, don't you have to clone from here: https://github.com/zylon-ai/private-gpt.git?
When I run poetry install --with ui,local, I see the following error:
Group(s) not found: local (via --with), ui (via --with)
I would appreciate your help in resolving this issue.
@das-wu I found the solution in this question https://stackoverflow.com/questions/78149911/poetry-poetry-install-with-ui-error-groups-not-found-ui-via-with
Is there any method I can use to improve the speed of ingesting files? It is taking more than 10 minutes on a 2 MB PDF, and I have a Ryzen 7 5800H.
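As a rough illustration of why ingestion on CPU can take this long: embedding runs once per chunk, so the chunk count of the extracted text dominates total time. The figures below are assumptions for the sake of the sketch, not measurements from this thread:

```python
# Back-of-envelope sketch: why a 2 MB PDF can take over 10 minutes to ingest.
# All numbers are illustrative assumptions, not measured values.
text_bytes = 2 * 1024 * 1024   # assumed size of extracted text
chunk_size = 1024              # assumed chunk size in bytes
seconds_per_chunk = 0.5        # assumed CPU embedding latency per chunk

chunks = text_bytes // chunk_size
minutes = chunks * seconds_per_chunk / 60
print(f"{chunks} chunks -> about {minutes:.0f} minutes")
# prints: 2048 chunks -> about 17 minutes
```

Under these assumptions the time lands in the same ballpark the comment reports, which is why GPU offloading (or a smaller embedding model) is usually the lever that helps most.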
I had to install pipx, poetry, and chocolatey on my Windows 11 machine. Apart from poetry install --with ui,local (that command didn't work for me), all the commands worked. Instead, this command worked for me: poetry install --extras "ui llms-llama-cpp vector-stores-qdrant embeddings-huggingface"
I'm doing it under WSL, follow this guide and you'll have a reasonable starting base
Did you use the same commands he ran on Mac?
Is there a good tutorial for ubuntu out there? Just curious
I do not have an answer to your question, but having successfully done the Windows version, I wonder if you can't just follow the Linux version of that. The privateGPT instructions have gotten better for Windows, and Linux is covered there as well. Might look at that; I know I am going to, since I have a container use case and Linux is a better fit for it.
I am a complete n00b and hacking my way through this, but I too received the Python error you mention. In order to fix it, I ran conda install python=3.11 after activating my environment.
I am finding that the toml file is not correct for Poetry 1.2 and above, because it's using the old format for the ui variable. There is also no local group defined in the file, so his command --with ui,local will never work. I updated the toml to use the 1.2+ format but then ran into another issue referencing the object "list".
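For reference, a sketch of the format difference described above; these fragments are illustrative, not copied from privateGPT's actual pyproject.toml. Poetry 1.2+ expects named dependency groups in the [tool.poetry.group.<name>.dependencies] form, which is what --with <name> resolves against:

```toml
# Pre-1.2 style: only a single implicit "dev" group was supported.
[tool.poetry.dev-dependencies]
pytest = "^7.0"

# Poetry 1.2+ style: arbitrary named groups, installable with --with ui
[tool.poetry.group.ui.dependencies]
gradio = "^4.0"
```

A group name that appears nowhere in the file (like local here) produces exactly the "Group(s) not found" error reported earlier in this thread.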
Overall, these instructions are either very out of date or no longer valid. Reading the privateGPT documentation, it talks about having Ollama running for local LLM capability, but these instructions don't mention that at all. I'm very confused.