
Why would you need to create a local account? You can simply choose not to store the keys in your Microsoft account during BitLocker setup: https://www.diskpart.com/screenshot/en/others/windows-11/win...

Admittedly, the risks of choosing this option are not clearly laid out, but the way you're framing it isn't accurate either.


All "Global Reader" accounts have "microsoft.directory/bitlockerKeys/key/read" permission.

Whether you opt in or not, if you connect your account to Microsoft, then they do have the ability to fetch the BitLocker key, provided the account is not local-only. [0] Global Reader is built into everything 365 and up.

[0] https://github.com/MicrosoftDocs/entra-docs/commit/2364d8da9...
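
To make the scope concrete, here is a minimal Python sketch of what reading a recovery key through Microsoft Graph looks like for an account holding that permission. The token, key ID, and client-header values are placeholders, not working values:

    import requests

    TOKEN = "<access-token-for-a-global-reader>"   # placeholder
    KEY_ID = "<recovery-key-guid>"                 # placeholder

    headers = {
        "Authorization": f"Bearer {TOKEN}",
        # Graph's BitLocker endpoints expect client-identifying headers.
        "ocp-client-name": "example-client",
        "ocp-client-version": "1.0",
    }

    # List recovery key metadata for the tenant (no key material yet).
    r = requests.get(
        "https://graph.microsoft.com/v1.0/informationProtection/bitlocker/recoveryKeys",
        headers=headers,
    )
    r.raise_for_status()
    print([k["id"] for k in r.json()["value"]])

    # Fetch the actual key material by asking for the "key" property.
    r = requests.get(
        f"https://graph.microsoft.com/v1.0/informationProtection/bitlocker/recoveryKeys/{KEY_ID}",
        headers=headers,
        params={"$select": "key"},
    )
    r.raise_for_status()
    print(r.json()["key"])  # the recovery password itself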


They're Microsoft and it's Windows. They always have the ability to fetch the key.

The question is: do they ever fetch and transmit it if you opt out?

The expected answer would be no. Has anyone shown otherwise? Because hypotheticals about what they could do are not useful.


> Because hypotheticals about what they could do are not useful.

Why? They are useful to me, and I appreciate the hypotheticals because they highlight the gap between "they can access my data and I trust them to do the right thing" and "they literally can't access my data, so trust doesn't matter."


Considering all the shenanigans Microsoft has been up to with Windows 11 around privacy, advertising, and so on?

Hell, all the times they keep enabling OneDrive despite it being really clear I don’t want it, and then uploading stuff to the cloud that I don’t want uploaded?

I have zero trust in Microsoft now, and I didn't have much more in the past either.


This 100% happens. They’ve done it to at least one of my clients, in pretty explicit violation of HIPAA (they are a very small health insurance broker), even though OneDrive had never been engaged with; indeed, we had previously uninstalled OneDrive entirely.

One day they came in and found an icon on their desktop labeled “Where are my files?” that explained they had all been moved into OneDrive following an update. This prompted my clients to go into full meltdown mode, as they knew exactly what this meant. We ultimately got a BAA from Microsoft just because we don’t trust them not to violate federal laws again.


What do Entra role permissions have to do with Microsoft's ability to turn over data in its possession to law enforcement in response to a court order?

This is for _Active Directory_. If your machine is joined to a domain, the keys will be stored in AD.

This does not apply to standalone devices. MS doesn't have a magic way to reach into your laptop and pluck the keys.


> MS doesn't have a magic way to reach into your laptop and pluck the keys.

Of course they do! They can just create a Windows Update that does it. They have full administrative access to every single PC running Windows in this way.


People really pay too little attention to this attack avenue.

It's both extremely convenient and very unlikely to be detected, especially given that most current systems are associated with an account.

I'd be surprised if it weren't widely used by law enforcement when it's not possible to hack a device in more obvious ways.

Please check out theupdateframework.io if you have a say in the design of an update system.
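
For illustration, a minimal sketch of a client update using the python-tuf reference implementation; the repository URLs, paths, and target name here are made up:

    from tuf.ngclient import Updater

    # A real deployment ships initial root metadata with the application,
    # so trust doesn't depend on the update server itself.
    updater = Updater(
        metadata_dir="/var/lib/myapp/tuf/metadata",
        metadata_base_url="https://updates.example.com/metadata/",
        target_dir="/var/lib/myapp/tuf/targets",
        target_base_url="https://updates.example.com/targets/",
    )

    # Downloads and verifies root/timestamp/snapshot/targets metadata:
    # threshold signatures, version monotonicity, and expiry are checked,
    # which makes a server-side "special update for one user" much harder.
    updater.refresh()

    info = updater.get_targetinfo("myapp-2.1.0.tar.gz")
    if info is not None:
        path = updater.download_target(info)  # length and hashes verified
        print("verified update at", path)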


I actually misremembered what theupdateframework.io is; I thought it provided more protections...

Isn't it the same with many Linux distros?

Updates run as root, don't they?


It's largely the same for all automatic updating systems that don't protect against personalized updates.

I don't know the status of the updating systems of the various distributions; if some use server-delivered scripts run as root, that's potentially a further powerful attack avenue.

But I was assuming that the update process itself is safe; the problem is that you usually don't have guarantees that the updates you get are genuine.

So if you update a component run as root, yes, the update could include malicious code that can do anything.

But even an update to a very constrained application could be very damaging: for example, if it is for an E2EE messaging application, it could modify it to send each encryption key to a law enforcement agency.
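
One mitigation sketch, assuming a hypothetical public log that publishes the digest of every release: before installing, compare the digest of the update you were served with the one everyone else was served, so a personalized build stands out.

    import hashlib
    import requests

    # Hypothetical transparency log for release digests.
    LOG_URL = "https://log.example.org/releases/myapp/2.1.0.sha256"

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    local = sha256_of("/tmp/myapp-2.1.0.tar.gz")      # made-up path
    published = requests.get(LOG_URL, timeout=10).text.strip()

    if local != published:
        raise SystemExit("update differs from the publicly logged build")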


> the problem is that you usually don't have guarantees that the updates you get are genuine

A point of order: you do have that guarantee for most Linux distro packages, all 70,000 of them in Debian's case. And all Linux distros distribute their packages anonymously, so they can never target just one individual.

That's primarily because they aren't trying to make money off you. Making money requires a billing relationship and tracking which of your customers owns what. Off the back of that, governments can demand that particular users be targeted with "special" updates. Australia in particular demands commercial providers do that with its "Assistance and Access Bill (2018)", and I'm sure most governments in the OECD have equivalents.
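
That guarantee is mechanical: apt verifies a signed InRelease file, which pins the hash of the Packages index, which in turn pins the hash of every .deb. A rough Python sketch of the last link, assuming the index itself has already been verified and using a made-up package file:

    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Simplified parse of the Packages index (whose own hash the signed
    # InRelease file pins); real stanzas can have continuation lines.
    expected, block = None, {}
    for line in open("Packages"):
        line = line.rstrip("\n")
        if not line:
            if block.get("Package") == "hello":
                expected = block.get("SHA256")
            block = {}
        elif ": " in line:
            k, _, v = line.partition(": ")
            block[k] = v
    if block.get("Package") == "hello":
        expected = block.get("SHA256")

    assert expected is not None, "package not in index"
    assert sha256_of("hello_2.10-3_amd64.deb") == expected, "hash mismatch"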


> so they can never target just one individual

You're assuming the binary can't just include a machine check that activates only on the target's computer.
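
A purely illustrative sketch of that concern: identical bytes ship to everyone, but the payload gates on a host identifier, so only one machine ever misbehaves. The digest and payload here are hypothetical.

    import hashlib
    import pathlib

    # Hypothetical digest of the victim's machine-id, baked in in advance.
    TARGET_DIGEST = "0" * 64

    def activate_payload() -> None:
        ...  # whatever the "special" behaviour is meant to be

    machine_id = pathlib.Path("/etc/machine-id").read_text().strip()
    if hashlib.sha256(machine_id.encode()).hexdigest() == TARGET_DIGEST:
        activate_payload()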


Yes, they can do that. But they can't select who gets the binary, so everybody gets it. Debian does reproducible builds on trusted machines, so they would have to infect the source.

You can safely assume the source will be viewed by a lot of people over time, so the change will be discovered. The source is managed mostly by git, so there would be history about who introduced the change.

The reality is that open source is so far ahead of proprietary code on transparency that there is almost no contest at this point. If a government wants to compromise proprietary code, it's easy, cheap, and undetectable. Try the same with open source and it's still cheap, but the social engineering isn't easy, and it will be detected; it's just a question of how long it takes.
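
Reproducible builds are what make that tampering detectable by simple comparison: anyone can rebuild from the public source with the pinned toolchain and diff the digest against the shipped binary. A sketch with made-up paths:

    import hashlib

    def digest(path: str) -> str:
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    official = digest("hello_2.10-3_amd64.deb")          # from the mirror
    rebuilt = digest("rebuild/hello_2.10-3_amd64.deb")   # built locally

    if official != rebuilt:
        print("MISMATCH: shipped binary was not built from the public source")
    else:
        print("reproducible: byte-for-byte identical")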


Not really, but it's quite complex for Linux because there are so many ways one can manage the configuration of a Linux environment. For something high-security, I'd recommend something like Gentoo or NixOS, because they have several huge advantages:

- They make it easy to set up and maintain immutable, reproducible builds.

- You only install the software you need, and even within each software item, you only build/install the specific features you need. For example, if you are building a server that will sit in a datacentre, you don't need to build software with Bluetooth support, and by extension, you won't need to install Bluetooth utilities and libraries.

- Both have a monolithic Git repository for packages, which is advantageous because you gain a giant distributed Merkle tree for verifying that you have the same packages as everyone else (a sketch of the idea follows this list). As observed with xz-utils, you want a supply-chain attacker to be forced to infect as many people as possible, so more people are likely to detect it.

- Sandboxing is used to minimise the lines of code during build/install which need to have any sort of privileges. Most packages are built and configured as "nobody" in an isolated sandbox, then a privileged process outside of the sandbox peeks inside to copy out whatever the package ended up installing. Obviously the outside process also performs checks such as preventing cool-new-free-game from overwriting /usr/bin/sudo.

- The time between a patch hitting an upstream repository and that patch being part of a package installed in these distributions is short. This is important at the moment because there are many efforts underway to replace and rewrite old, insecure software with modern, secure equivalents, so you want to be using software with a modern design, not just 5-year-old long-term-support software. E.g. glycin is a relatively new library used by GNOME applications for loading untrusted images. You don't want to be waiting 3 years for a new long-term-support release of your distribution to get this software.
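
To make the Merkle-tree point concrete, here is a sketch of the idea: files hash their contents, directories hash their children's names and hashes, so one root hash commits to the entire package tree. (Git's actual object format differs in detail, and the repository path is Gentoo's default, used purely as an illustration.)

    import hashlib
    from pathlib import Path

    def merkle(path: Path) -> str:
        if path.is_file():
            return hashlib.sha256(path.read_bytes()).hexdigest()
        h = hashlib.sha256()
        for child in sorted(path.iterdir()):
            h.update(child.name.encode())
            h.update(bytes.fromhex(merkle(child)))
        return h.hexdigest()

    # Two users whose checkouts agree on this root hash agree on every
    # file beneath it; in practice the git commit hash of nixpkgs or the
    # Gentoo ebuild repository plays exactly this role.
    print(merkle(Path("/var/db/repos/gentoo")))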

No matter which distribution you use, you'll get some common benefits such as:

- Ability to deploy user applications using something like Flatpak, which ensures they run within a sandbox.

- Ability to deploy system applications using something like systemd, which ensures they run within a sandbox.

Microsoft have long underinvested in Windows (particularly the kernel), and have made numerous poor and failed attempts to introduce secure application packaging/sandboxing over the years. Windows is now akin to the horse and buggy when compared to the flying cars of open source Linux, iOS, Android and HarmonyOS (v5+ in particular which uses the HongMeng kernel that is even EAL6+, ASIL D and SIL 3 rated).


Furthermore, it seems like it's specific to Azure AD, and I'm guessing it probably only has an effect if you enable the option to back up the keys to AD in the first place, which is not mandatory.

I'd be curious to see a conclusive piece of documentation about this, though.


Regular AD also has this feature: you can store the encryption keys on the domain controller. I don't think it's turned on by default, but you can enable it with a group policy update.
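
For reference, that policy lands in the registry under HKLM\SOFTWARE\Policies\Microsoft\FVE. Here is a sketch that checks whether OS-drive recovery keys are backed up to AD DS; the value names are from memory, so treat them as assumptions and verify against your own GPO dump.

    import winreg

    KEY_PATH = r"SOFTWARE\Policies\Microsoft\FVE"
    # Value names as I recall them from the "Store BitLocker recovery
    # information in AD DS" policy; verify before relying on them.
    VALUES = ("OSActiveDirectoryBackup", "OSRequireActiveDirectoryBackup")

    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            for name in VALUES:
                try:
                    value, _ = winreg.QueryValueEx(key, name)
                    print(f"{name} = {value}")   # 1 means enabled/required
                except FileNotFoundError:
                    print(f"{name} not set")
    except FileNotFoundError:
        print("no BitLocker group policy configured")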

That's for Entra/AD, aka a workplace domain. Personal accounts are completely separate from this. (Microsoft don't have an AD relationship with your account; if anything, personal MS accounts reside in their own empty Entra forest.)

They could also just push an update that changes this and grabs the key anyway.

If you really don't trust Microsoft at all then don't use Windows.



