novaomnidev on Nov 30, 2023 | on: Llamafile lets you distribute and run LLMs with a ...
Why is this faster than running llama.cpp main directly? I'm getting 7 tokens/sec with this, but only 2 with llama.cpp by itself.