
I agree that LLMs can be useful companions for thought when used correctly. I don’t agree that LLMs are good at “supplying clean verbal form” of vaguely expressed, half-formed ideas and that this results in clearer thinking.

Most of the time, the LLM’s framing of my idea is more generic and superficial than what I was actually getting at. It looks polished, but on closer inspection it often misses the point.

There is a real danger, to the extent you allow yourself to accept the LLM’s version of your idea, that you will lose the originality and uniqueness that made the idea interesting in the first place.

I think the struggle to frame a complex idea, and the frustration you feel when the right framing eludes you, is where most of the value lies. Using the LLM as a cheat code to skip past that pain is not really a good thing.

I often discuss ideas with peers that I trust to be strong critical thinkers. Putting the idea through their filters of scrutiny quickly exposes vulnerabilities that I'd have to patch on the spot, sometimes revealing weaknesses resulting from bad assumptions.

I started to use LLMs in a similar fashion. It is a different experience. Where a human would deconstruct you for fun, the LLM tries to engage positively by default. Once you tell it to tell it like it is, you get the "honestly, this may fail, and here's why".

By my assessment, an LLM is better than being alone in a task, and that is the value proposition.


LLMs are tools. A huge differentiator between professionals in any field is how well they know their tools.

But one of the first things to understand about power tools is to know all the ways in which they can kill you.



