Mirror of https://github.com/ggerganov/llama.cpp.git, synced 2026-02-05 13:53:23 +02:00
* llama: use max. GPU layers by default, auto -fa
* ggml-backend: abort instead of segfault
13 KiB
Executable File