The official llama.cpp binary releases for Ubuntu only ship CPU and Vulkan builds; for native ROCm acceleration you have to compile it yourself. This provides GPU acceleration for HIP-capable AMD GPUs. Make sure ROCm is installed first; you can get it from your Linux distribution's package manager or from the ROCm Quick Start (Linux) guide.
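Before building, it is worth confirming that the driver actually sees your GPU. One common way to check (assuming the ROCm tools are on your PATH) is rocm-smi:
# Verify that the AMD GPU is visible to ROCm
rocm-smi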
# Install the required dependencies
sudo apt install cmake git libcurl4-openssl-dev
# Clone the repository
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
# Build
HIPCXX="$(hipconfig -l)/clang" HIP_PATH="$(hipconfig -R)" cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx906 -DCMAKE_BUILD_TYPE=Release && cmake --build build --config Release -- -j 28
If the build succeeds, all of the CLI executables and shared libraries (.so) can be found in llama.cpp/build/bin.
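For example, a quick listing (the exact set of binaries depends on your build options):
# List the produced executables and libraries
ls build/bin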
# Test command
./build/bin/llama-cli
# Output
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 ROCm devices:
Device 0: AMD Radeon Graphics, gfx906:sramecc-:xnack- (0x906), VMM: no, Wave Size: 64
build: 0 (unknown) with cc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0 for x86_64-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_model_load_from_file_impl: using device ROCm0 (AMD Radeon Graphics) (0000:05:00.0) - 32732 MiB free
gguf_init_from_file: failed to open GGUF file 'models/7B/ggml-model-f16.gguf'
llama_model_load: error loading model: llama_model_loader: failed to load model from models/7B/ggml-model-f16.gguf
llama_model_load_from_file_impl: failed to load model
common_init_from_params: failed to load model 'models/7B/ggml-model-f16.gguf', try reducing --n-gpu-layers if you're running out of VRAM
main: error: unable to load model
The output shows that the ROCm backend was initialized and the GPU was detected, which means the build was successful. (The model-load error is expected here, since no model file was supplied.)
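To actually run inference on the GPU, point llama-cli at a GGUF model and offload layers with -ngl; the path below is only a placeholder for your own model file:
# Run a GGUF model with all layers offloaded to the GPU
./build/bin/llama-cli -m /path/to/model.gguf -ngl 99 -p "Hello"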
Original article; when reposting, please credit 诺德美地科技.