Hacker News

I don't see why you can't use a container for LLMs; that's how we've been shipping and deploying runnable models for years.


Being able to run an LLM without first installing and setting up Docker or similar feels like a big win to me.

Is there an easy way to run a Docker container on macOS such that it can access the GPU?


Not sure; I use cloud VMs for ML stuff.
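For what it's worth, on Linux the usual route is the NVIDIA Container Toolkit plus the `--gpus` flag on `docker run`; macOS is a different story because Docker Desktop runs containers in a Linux VM. A rough sketch (the CUDA image tag is illustrative):

```shell
# Linux host with an NVIDIA GPU and the NVIDIA Container Toolkit installed:
# expose all GPUs to the container and verify with nvidia-smi.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi

# On macOS there is no equivalent: containers run inside a Linux VM and the
# Apple GPU is not passed through to it, so Metal-accelerated inference
# (e.g. llama.cpp) has to run on the host rather than in a container.
```

That VM boundary is the main reason native LLM runners are attractive on Macs.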

We definitely prefer to use the same tech stack for dev and production, and we already have Docker (mostly migrated to nerdctl, actually).

Can this project do production deploys to the cloud? Is it worth adding more tech to the stack for this use case? I often wonder how much devops gets reimplemented in more specialized fields.



