
Does Microsoft let you encrypt the key with your password / passphrase (with a backup you can write down)?

Technically it is possible to configure BitLocker using a passphrase instead of a TPM. It is not easy, though; it has to be configured via GPO. However, it is not your local account password. It is a separate passphrase which you need to provide early in the boot process, similar to LUKS on Linux systems. It works on Windows computers without a TPM; I'm not sure whether it's supported on systems that actually have a TPM available.
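
A minimal sketch of what the non-TPM setup looks like, assuming the GPO "Require additional authentication at startup" (with "Allow BitLocker without a compatible TPM") is already enabled; the manage-bde protector flags below are the documented ones, but treat this as a hypothetical starting point rather than a recipe:

    # Hypothetical sketch: add a pre-boot passphrase plus a recovery
    # password (the part you write down), run from an elevated shell.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    run(["manage-bde", "-protectors", "-add", "C:", "-Password"])          # prompts for the passphrase
    run(["manage-bde", "-protectors", "-add", "C:", "-RecoveryPassword"])  # prints a 48-digit backup key
    run(["manage-bde", "-on", "C:"])                                       # start encrypting the volume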

To be clear, GLM 4.7 Flash is a MoE model with 30B total params but <4B active params, while Devstral Small is 24B dense (all params active, all the time). GLM 4.7 Flash is much, much cheaper inference-wise.
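
A back-of-the-envelope way to see it, using the common rule of thumb that a decode step costs roughly 2 * active_params FLOPs per generated token (this ignores attention and memory-bandwidth effects, so take it as a rough sketch):

    # Rough per-token compute comparison between the two models.
    glm_active = 4e9        # GLM 4.7 Flash: ~4B active of 30B total (MoE)
    devstral_active = 24e9  # Devstral Small: 24B dense, always all active

    flops_per_token = lambda active: 2 * active
    ratio = flops_per_token(devstral_active) / flops_per_token(glm_active)
    print(f"Devstral burns ~{ratio:.0f}x the FLOPs per token")  # ~6x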

You can find active discussions here: https://news.ycombinator.com/active

(Including ones on flagged submissions)


Okay, but why isn't it visible in /news?

This post is ranked 7th in /active now. Quickly cross-checking /active and /news, I've found no other post in /active that isn't visible in /news. It went from 100 to 200 points since I noticed the delisting. /active is an obscure list; I doubt that's how many people find this post.

Whatever HN is doing, it seems to be completely opaque and selective. Some A/B testing, or geofencing. In any case, questionable and manipulative, like they are trying to hide interference and engineer popularity/engagement to whatever end.

And you have to wonder whether this has anything to do with the fact that this particular political move seems to have greatly backfired on every possible axis, apparently even within the conservative and MAGA base. It may turn out to be exceptionally stupid, especially before the midterms. I've seen impeachment calls in /r/conservative (lol), and they are usually an extension of Trump's digestive system. Diplomacy with Europe is basically dead, France wants to trigger the EU's extortion clause, and it's a Sunday.

Maaaybe there is active damage control going on.


HN moderation routinely demotes politically charged threads so that they don’t show at the top of the default front page all the time.

It's not demoted; as far as I can tell, it's gone. In any case, pretty shady to do this covertly.

If it’s “gone” then it’s because too many users flagged it. You can turn on “showdead” in your profile to see them again. It isn’t done covertly. You can email hn@ycombinator.com about specific posts to get an explanation.

I have that option set. It's not marked flagged or dead.

See, the weird thing is how many people found their way here after it got delisted.


So it’s not actually gone? Again, instead of speculating, send an email if something is unclear. Yes, moderation is purposefully selective, but not based on political agenda. Dang has repeatedly explained moderation policy in the past.

This blog post has some information: https://drewdevault.com/2017/09/13/Analyzing-HN.html


> So it’s not actually gone?

It is. Dude, just check it yourself instead of sealioning?!

It's in /active, not on the front page or 15 pages in as stated above. It's not marked anything, which would also show next to the title of the post itself. So what's your fucking offense? If all of this is of no concern to you, why bother commenting? Yeah, thanks for pointing out I can write mails somewhere. I should also write my representative and call the embassy. And sorry, I haven't read every thread ever to know what Dang said at some point in the past. Well, what did he say about opaque visibility manipulation? How about leaving a message in the respective threads?

I was just pointing out that there is opaque, weird censorship of this post. I don't care as much about the alleged reasons. People should be aware this is a covertly distorted discussion.


What are you talking about? Google came out in 1998 and introduced ads in search results in 2000.

Not at all. They train on your prompts and codebase unless you opt out.

I wish there were more open benchmarks comparing different setups and different engines. There are so many knobs to tune (TP / DP / PP / PD / speculative decoding / etc.), and while the optimal setup will be highly dependent on the model, the environment, and the traffic, it's likely some useful conclusions could be drawn.
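
Just to illustrate how combinatorial the space gets, a toy sweep; the knob names are illustrative and the real flags differ across vLLM, SGLang, and TRT-LLM:

    # Enumerate a small slice of the serving-config search space.
    import itertools

    sweep = itertools.product(
        [1, 2, 4, 8],    # tensor-parallel degree (TP)
        [1, 2],          # data-parallel replicas (DP)
        [False, True],   # speculative decoding on/off
    )
    for tp, dp, spec in sweep:
        # A real harness would launch the engine with this config and
        # record tokens/s, time-to-first-token, and cost per 1M tokens.
        print(f"run: TP={tp} DP={dp} spec_decode={spec}")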

It almost feels like in the past year there's been some unwritten agreement between the three main open-source engines (vLLM, SGLang, TRT-LLM) not to compare against each other directly :) They used to publish benchmarks comparing against each other quite regularly.


Great work! What optimizations are you most excited about for 2026?

Lots of cool stuff coming up! As a Ray developer, I focus more on the orchestration layer, so I'm excited about things like Elastic Expert Parallelism, post-training enhancements like colocated trainers/engines, and deploying DSV4 (rumor is the architecture will be complex). The vLLM roadmap is here for reference: http://roadmap.vllm.ai/

APIs are usually very profitable. As for subscriptions, it would depend on how many tokens the average subscriber uses per month. Do we have some source of info on this?

Some notes:

- The number of input tokens and output tokens per request matters a lot.

- KV cache hit rate matters a lot.

- vLLM is not necessarily the most efficient engine.

- You are looking at the API cost for DeepSeek V3.2, which is much cheaper than DeepSeek R1 / V3 / V3.1. DeepSeek V3.2 is a different architecture (sparse attention) that is much more efficient. The cheapest DeepSeek V3 option (FP8) tends to be ~$1/mil output tokens, while R1 tends to be ~$2.5/mil (note that, for example, Together AI charges a whopping $7/mil output tokens for R1!).

As for the cost: you can also get H200s for ~$1.6/hr and H100s for ~$1.2/hr. That somewhat simplifies the calculations :)

Ignoring the caveats and assuming H200s, with their setup you will (see the arithmetic check after this list):

- Process 403,200,000 input tokens.

- Generate 126,720,000 output tokens.

- Spend $25.60.

- On Together with DS R1, it would cost you $3 * 403.2 + $7 * 126.7 = ~$2,096. Together does not even offer a discount for KV cache hits (what a joke :)).

- On NovitaAI with DS R1, it would cost you $0.7 * 403.2 + $2.5 * 126.7 = ~$600 (with a perfect cache hit rate, which gives a 50% discount on input tokens here, it would be ~$458).
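
The arithmetic above as a quick script (prices in $ per million tokens from the bullets; token counts are the ones assumed for the H200 setup):

    input_m, output_m = 403.2, 126.72  # millions of tokens

    together = 3.0 * input_m + 7.0 * output_m             # no cache discount offered
    novita = 0.7 * input_m + 2.5 * output_m               # no cache hits
    novita_cached = 0.7 * 0.5 * input_m + 2.5 * output_m  # 50% off all input (perfect hit rate)

    print(f"Together:        ${together:.2f}")        # = $2096.64
    print(f"NovitaAI:        ${novita:.2f}")          # = $599.04
    print(f"NovitaAI cached: ${novita_cached:.2f}")   # = $457.92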


Nothing wrong with a GPL-like viral license for the AI era.

Training on my code / media / other data? No worries, just make sure the weights and other derived artifacts are released under a similarly permissive license.


Well, I would say it should be like that already, and no new license is needed. Basically, if an LLM was ever trained on GPL code, its output should also be GPL licensed. As simple as that.

Define source, compile, and library.

Licenses like GPL are built on top of an enforcement mechanism like copyright. Without an enforced legal framework preventing usage unless a license is agreed to, a license is just a polite request.

We need countries to start legally enforcing that. Nothing will change otherwise. I stopped open-sourcing my code, and LLMs are one of the big reasons.

Wouldn't you want the code generated by those models to be released under those permissive licenses as well? Is that what you mean by other derived artifacts?

That's how I interpreted it, at least.

It really should be like that indeed. Where is RMS? Is he working on a GPLv4?

If model training is determined to be fair use under US copyright law, whether legislated by Congress or interpreted by federal courts, then no license text can remove the right to use source code that way.

> then no license text can remove the right to use source code that way.

At least in the US.

Quite what happens if another country ordered, say, ChatGPT to be released under the AGPL since it was trained on AGPL code, who knows.


RMS is probably well behind on technical news at this point. I mean, he surfs the web via email summaries of some websites. Even if he doesn't approve of how the internet is evolving, he can't really keep up with technology if he doesn't "mingle".

He's also 72; we can't expect him to save everyone. We need new generations of FOSS tech leaders.


I am Gen Z and I am part of the FOSS community (I think), and one of the issues with raising new generations of FOSS tech leaders is that it's hard to pull off even if one tries.

Something about Richard Stallman really is out of this world: he made people care about open source in the first place.

I genuinely don't know how people can replicate it. I had even tried and gone through such a phase once, but the comments on Hacker News weren't really helpful back then:

https://news.ycombinator.com/item?id=45558430 (Ask HN: Why are most people not interested in FOSS/OSS and can we change that)


As much as RMS has meant for the world, he's also a pretty petty person. He's about freedom, but mostly about user freedom, not creator freedom. I also went through such a phase, but using words like "evil" is just too black and white. I don't think he is a nice person to be around, judging from some podcasts and videos.

If there is one thing Stallman knows well, it is the way he uses words, and I can assure you that if he calls something "evil", that is exactly the word he meant to use.

> user freedom, not creator freedom

In his view, users are the creators and creators are the users. The only freedom he asks you to give up is the freedom to limit the freedom of others.


RMS asks you to give something up: your right to share a thing you made under your own conditions (which may be conditions even the receiving party agrees to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.

I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.

Again, IMHO the GPL is a blessing for humanity, and bless the people that choose it freely.


> RMS asks you to give something up: your right to share a thing you made under your own conditions (which may be conditions even the receiving party agrees to). Nobody is forced in this situation, and yet he calls that evil. I think that is wrong.

This is not true, though. As the copyright holder, you are allowed to license your work however you wish, even if it's under, for example, GPL-3.0-or-later or whatever. You can license your code outside of the terms of the GPL to a particular user or group of users, for example for payment.

Really, it's only when the user agrees to abide by the license that you'd have to give access to source code when asked, for example.

> I love FOSS, don't get me wrong. But people should be able to say: I made this; if you want to use it, it's under these conditions, or I won't share it.

And they can. Whether that wins one any friends or not is another matter.


Oh, and bless the people that won't use anything but GPL software.

Don't bless the people that think you are evil for not applying the GPL to your creation.


> user freedom, not creator freedom

Creators are not just creators; they're also users. There's a very solid chance that a better world for everyone would be achieved if freedoms for all users were bulletproof. Every user should be able to modify and repair all their hardware and software without creator involvement.


And do we just not think about all the software that then doesn't get created, because people feel it's immediately everyone's property and so won't even bother?

Sure, we can copy software, so it's not like they are taking your house. But "they" may be taking your livelihood.

OK, objectively perhaps the world would be better, but we can't know, and opinions don't mean anything. What matters is individuals and being fair to them; whatever society grows from that is just what we have.

That said, if we ever go multi-planet and there is a planet with no copyright where everything is GPL, I'd check it out, and I imagine I'd feel quite at home there.


> Sure, we can copy software, so it's not like they are taking your house. But "they" may be taking your livelihood.

With GenAI, that's starting to happen anyway.


Which is why we perhaps need a GPLv4? With some provisions that force open-sourcing the model architecture + weights when such code is used as training material?

And also provisions somehow handling hyperscalers. Hyperscalers are big enough that they can build everything from scratch and stop ripping off individual FOSS contributors and small companies.

You can follow him at https://stallman.org/. What is he doing? I believe he's still giving talks and taking stances on current political issues. Additionally, I believe the last few years were quite turbulent, so I assume he is taking life at his own pace.

Interesting. Is there a license that does this already?

That is a complete fool's errand. If it ever passed, it would just mean the death of open-source AI models. All the big companies would just continue to collect whatever data they like, license it if necessary, or pay the fine if illegal (see Anthropic paying $1.5 billion for books). Meanwhile, every open-source model would be starved for training data within its self-enforced rules and easy to shut down if an incorrectly licensed bit ever slipped into the model.

The only way forward is the abolition of copyright.


I don't follow. If the model was open-sourced under this GPL-like license (or a compatible one), then it would comply with the license. If the model was closed, it would violate the license. In other words, it would not affect open-source models at all.

Similarly, I could imagine carving out an exception for training on copyrighted material without a license, as long as the resulting model is open-sourced.


> If the model was closed, it would violate the license.

Training is fair use. The closed models wouldn't be impacted. Even if we assume laws get changed and lawsuits happen, they just get settled and the closed-source models progress as usual (see Bartz v. Anthropic).

Meanwhile, if somebody wants to go all "GPL AI" and only train their models on GPL-compatible code, they'd just be restricting themselves. The amount of code they can train on shrinks drastically, the model quality ends up being garbage, and nothing is won.

Further, assuming laws got changed, those models would now be incredibly easy to attack, since any slip-up in training means the model needs to be scrapped. Unlike the big companies with their closed models, open-source efforts have neither the money to license data nor the billions needed to settle lawsuits. It would mean the end of open models.


I am pretty sure most people get Anthropic's move. I also think "getting it" is perfectly compatible with being unhappy about it and voicing that opinion online.

OP is responding to an article that largely frames Anthropic as clueless.

I don't think it intends to frame the move as clueless, but rather as short-sighted. It could very well be a good move for them in the short term.
