Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

In the early 2000s Wikipedia used to fill that role. Now it's like you have an encyclopedia that you can talk to.

What I'm slightly worried about is that eventually they are going to want to monetize LLMs more and more, and it's not going to be good, because they have the ability to steer the conversation towards trying to get you to buy stuff.





> they are going to want to monetize LLMs more and more

Not only can you run reasonably intelligent models on recent relatively powerful PC's "for free", but advances are undoubtedly coming that will increase the efficient use of memory and CPU in these things- this is all still early-days

Also, some of those models are "uncensored"


Can you? I imagine e.g. Google is using material not available to the public to train their models (unsencored Google books, etc.). Also, the chat bots, like Gemini, are not just pure LLMs anymore, but they also utilize other tools as part of their computation. I've asked Gemini computationally heavy questions and it successfully invokes Python scripts to answer them. I imagine it can also use other tools than Python, some of which might not even be publicly known.

I'm not sure what the situation is currently, but I can easily see private data and private resources leading to much better AI tools, which can not be matched by open source solutions.


While they will always have premiere models that only run on data center hardware at first, the good news about the tooling is that tool calls are computationally very minimal and no problem to sandbox/run locally, at least in theory, we would still need to do the plumbing for it.

So I agree that open source solutions will likely lag behind, but that's fine. Gemini 2.5 wasn't unusable when Gemini 3 didn't exist, etc.


How do you verify the models you download also aren't trying to get you to buy stuff?

I guess you.. ask them a bunch of recommendations? I would imagine this would not be incredibly hard to test as a community

Before November 30, 2022 that would have worked, but I think it stopped being reliable sometime between the original ChatGPT and today.

As per dead internet theory, how confident are we that the community which tells us which LLM is safe or unsafe is itself made of real people, and not mostly astroturfing by the owners of LLMs which are biased to promote things for money?

Even DIY testing isn't necessarily enough, deceptive alignment has been shown to be possible as a proof-of-concept for research purposes, and one example of this is date-based: show "good" behaviour before some date, perform some other behaviour after that date.


Proudly bought to you by Slurm

One of the approaches to this that I haven't seen being talked about on HN at all is LLM as public infrastructure by the government. I think EU can pull this off. This also addresses overall alignment and compute-poverty issue. I wouldn't mind if my taxes paid for that instead of a ChatGPT subscription.

This is not a good idea at all.

Government should not be in a position to directly and pervasively shape people’s understanding of the world.

That would be the infinite regress opposite of a free (in a completely different sense) press.

A non-profit providing an open data and training regime for an open WikiBrain would be nice. With standard pricing for scaled up use.


Instead, we should let capitalism consolidate all power in the hands of the few, and let them directly and pervasively shape people's understanding of the world.

How would a non profit even be funded? That would just be government with extra steps.

No, capitalism giveth the LLMs and capitlism taketh the sales.


I've heard once that Americans distrust their government and trust their corporations, while Europeans distrust their corporations and trust their government. I honestly think that governments already has a huge role in shaping people's understanding of the world, and that's GREAT on good democratic countries.

What I find really weird is that I am stopping believing in the whole idea of free press, considering how the mainstream media is being bought by oligarchs around the globe. I think this is a good example of the erosion of trust in institutions in general. This won't end well.

Your idea of letting it be run by a non-profit makes me believe that you also don't trust institutions anymore.


> Government should not be in a position to directly and pervasively shape people’s understanding of the world.

You disagree with national curricula, state broadcasters, publicly funded research and public information campaigns?


Many Americans these days absolutely do disagree with all of those things. Educated ones. Theres simply a short circuit belief based pathway in peoples brains that bypasses everything rational on arbitrary topics.

Most of us used to see it as isolated to religion or niche political points, but increasingly everything os being swept into the "its political" camp.


this assumes that "the government" is "us" and not "them"...

or more generally than just ads: make you believe stuff that makes you act in ways that is detrimental to you, but benefitial to them (whoever sits in the center and can control and shape the LLM).

i.e. the Nudge Unit on steroids...

care must be taken to avoid that.


Right, this is what happened with search engines. And "SEO for LLMs" is already a thing.

It's also inevitable that better and better open source models will be distilled as frontier models advance.

I agree. I think the local models you can run on the "average computer" are not quite good enough yet, but I have hope that we will see much better small local models in the future.

Enshittification is always inevitable in a capitalist world, but not always easy to predict how it will happen.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: