installama.sh update: Vulkan & FreeBSD support added!

The fastest way to install and run llama.cpp has just been updated! We are expanding hardware and OS support to make local AI even more accessible. This includes:

• Vulkan support for Linux on x86_64 and aarch64.
• FreeBSD support (CPU backend) on x86_64 and aarch64 too.
• Lots of small optimizations and improvements under the hood.
Give it a try right now:
curl angt.github.io/installama.sh | MODEL=unsloth/Qwen3-4B-GGUF:Q4_0 sh
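For context, the `MODEL=…` prefix in the one-liner is plain POSIX shell: an assignment placed before a command sets that variable only in that command's environment, so the `sh` process reading the script from the pipe can see it. A minimal illustration of the mechanism (using a stand-in script, not installama.sh itself):

```shell
# An assignment before a command exports the variable just for that
# command; here the piped sh sees MODEL while the outer shell does not.
echo 'echo "model: $MODEL"' | MODEL=demo sh
# prints: model: demo
```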