Compile llama.cpp on Windows with AMD ROCm
:: BROKEN, DO NOT TRY
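:: Prerequisites assumed by the steps below: the AMD HIP SDK for Windows is
:: installed (it sets HIP_PATH and ships clang/clang++), and git, cmake and
:: ninja are available on PATH.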
:: Switch to the D: drive ("D:\" by itself is not a command).
D:
:: Put the HIP SDK toolchain at the front of PATH.
set PATH=%HIP_PATH%\bin;%PATH%
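:: Optional sanity check: both commands should succeed, and clang should
:: resolve to the copy inside %HIP_PATH%\bin.
echo %HIP_PATH%
where clang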
:: llama.cpp itself is assumed to be already cloned at D:\llama.cpp.
cd llama.cpp
:: Build libcurl with vcpkg; it is needed for the LLAMA_CURL model-download support.
git clone https://github.com/microsoft/vcpkg.git
cd vcpkg
bootstrap-vcpkg.bat
vcpkg.exe install curl:x64-windows
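:: Optional: confirm libcurl.lib ended up where the cmake flags below expect it.
dir packages\curl_x64-windows\lib\libcurl.lib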
cd ..
mkdir build
cd build
:: Configure the HIP (ROCm) backend. gfx1103 targets the RDNA3 iGPU in Phoenix
:: APUs (e.g. Radeon 780M); change AMDGPU_TARGETS to match your GPU.
cmake .. -G Ninja -DAMDGPU_TARGETS=gfx1103 -DGGML_HIP=ON -DGGML_OPENMP=OFF ^
-DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ^
-DLLAMA_CURL=ON ^
-DCURL_LIBRARY=D:\llama.cpp\vcpkg\packages\curl_x64-windows\lib\libcurl.lib ^
-DCURL_INCLUDE_DIR=D:\llama.cpp\vcpkg\packages\curl_x64-windows\include
cmake --build . --config Release
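:: If the build succeeds, a minimal smoke test from the build directory could
:: look like this (the model path is a placeholder, not part of the original steps):
bin\llama-cli.exe -m D:\models\some-model.gguf -ngl 99 -p "Hello"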