The CUDA compiler identification is unknown
Collecting llama-cpp-python
  Using cached llama_cpp_python-0.2.64.tar.gz (37.4 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Installing backend dependencies ... done
  Preparing metadata (pyproject.toml) ... done
Collecting jinja2>=2.11.3
  Using cached Jinja2-3.1.3-py3-none-any.whl (133 kB)
Collecting diskcache>=5.6.1
  Using cached diskcache-5.6.3-py3-none-any.whl (45 kB)
Collecting typing-extensions>=4.5.0
  Using cached typing_extensions-4.11.0-py3-none-any.whl (34 kB)
Collecting numpy>=1.20.0
  Using cached numpy-1.26.4-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
Collecting MarkupSafe>=2.0
  Using cached MarkupSafe-2.1.5-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (25 kB)
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... error
  error: subprocess-exited-with-error

  × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [38 lines of output]
      *** scikit-build-core 0.9.2 using CMake 3.29.2 (wheel)
      *** Configuring CMake...
      loading initial cache file /tmp/tmpvc9le0vd/build/CMakeInit.txt
      -- The C compiler identification is GNU 9.4.0
      -- The CXX compiler identification is GNU 9.4.0
      -- Detecting C compiler ABI info
      -- Detecting C compiler ABI info - done
      -- Check for working C compiler: /usr/bin/cc - skipped
      -- Detecting C compile features
      -- Detecting C compile features - done
      -- Detecting CXX compiler ABI info
      -- Detecting CXX compiler ABI info - done
      -- Check for working CXX compiler: /usr/bin/c++ - skipped
      -- Detecting CXX compile features
      -- Detecting CXX compile features - done
      -- Found Git: /usr/bin/git (found version "2.25.1")
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD
      -- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
      -- Check if compiler accepts -pthread
      -- Check if compiler accepts -pthread - yes
      -- Found Threads: TRUE
      -- Found CUDAToolkit: /usr/local/cuda/targets/x86_64-linux/include (found version "11.8.89")
      -- CUDA found
      -- The CUDA compiler identification is unknown
      CMake Error at /tmp/pip-build-env-r6dnttja/normal/lib/python3.10/site-packages/cmake/data/share/cmake-3.29/Modules/CMakeDetermineCUDACompiler.cmake:266 (message):
        Failed to detect a default CUDA architecture.

        Compiler output:
      Call Stack (most recent call first):
        vendor/llama.cpp/CMakeLists.txt:402 (enable_language)

      -- Configuring incomplete, errors occurred!

      *** CMake configuration failed
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
  ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects
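
The telltale lines are "The CUDA compiler identification is unknown" and "Failed to detect a default CUDA architecture": CMake found the CUDA 11.8 toolkit headers but could not locate and run nvcc, which usually means nvcc is not on the build environment's PATH. A minimal check, assuming the toolkit sits in the default /usr/local/cuda location:

# nvcc is typically not on PATH after a bare toolkit install
which nvcc || echo "nvcc not on PATH"
/usr/local/cuda/bin/nvcc --version   # should print the CUDA 11.8 release banner

The fix below points CMake directly at nvcc and forces a source build: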
CUDACXX=/usr/local/cuda/bin/nvcc CMAKE_ARGS="-DLLAMA_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip3.10 install llama-cpp-python
Collecting llama-cpp-python
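
Here CUDACXX tells CMake which CUDA compiler to use, -DLLAMA_CUDA=on enables llama.cpp's CUDA backend (later llama-cpp-python releases renamed this flag to -DGGML_CUDA=on), -DCMAKE_CUDA_ARCHITECTURES=native compiles only for the GPU present on the build machine, and FORCE_CMAKE=1 forces the wheel to be rebuilt from source. An equivalent alternative, sketched here assuming the same default /usr/local/cuda install path, is to put nvcc on PATH before installing:

# same effect as setting CUDACXX explicitly
export PATH=/usr/local/cuda/bin:$PATH
CMAKE_ARGS="-DLLAMA_CUDA=on -DCMAKE_CUDA_ARCHITECTURES=native" FORCE_CMAKE=1 pip3.10 install llama-cpp-python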
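
Once the install finishes, a quick sanity check that the CUDA backend was actually compiled in; llama_supports_gpu_offload is assumed to be exposed by this release's low-level bindings, so treat the second line as a sketch:

python3.10 -c "import llama_cpp; print(llama_cpp.__version__)"
python3.10 -c "import llama_cpp; print(llama_cpp.llama_supports_gpu_offload())"   # True means GPU offload is available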