rgbrgb on Nov 29, 2023 | on: Llamafile lets you distribute and run LLMs with a ...
In my experience, if you're on a Mac, you need RAM equal to roughly 150% of the model's file size to get it working well. I had a user report running my llama.cpp app on a 2017 iMac with 8 GB of RAM at ~5 tokens/second. Not sure about other platforms.
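A minimal sketch of that rule of thumb, if you want to sanity-check a model before downloading it (the 4.1 GiB model size below is a hypothetical example, not from the comment):

```python
def estimated_ram_gib(model_file_bytes: float, overhead_factor: float = 1.5) -> float:
    """Rule of thumb from the comment above: working RAM ~ model file size * 150%."""
    return model_file_bytes * overhead_factor / (1024 ** 3)

# Hypothetical example: a ~4.1 GiB quantized model would want ~6.2 GiB of RAM,
# so it should fit (tightly) on an 8 GB machine like the 2017 iMac mentioned above.
model_bytes = 4.1 * 1024 ** 3
print(f"~{estimated_ram_gib(model_bytes):.1f} GiB of RAM needed")
```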