You're making a common mistake. WARN is for the _previous_ layoff. The way they execute these, they bump out your effective last day so that the WARN notice doesn't appear until after the layoff has already been announced.
"While WARN requires only 60 days’ advance notice, Amazon is providing at least 90 days’ notice to all affected employees before their separations are scheduled to occur. Affected employees who accept internal transfer opportunities at Amazon prior to their separation date will not be separated as a result of this action."
These people were told they were being laid off in October but remain on payroll. What Amazon is "bracing" for is another one of these announcements, much larger, announcing people who will actually separate from the company 90 days later. They will find out on or around the same day the WARN notice is posted.
The date of the notice is October 28th. The separation dates given in the letter are 90 days out from the date of the notice, i.e., on or after January 26th (for some technical reasons, some employees had separation dates a bit further out).
They do not. They have no qualms about eliminating more senior roles as necessary, and generally prefer to staff in a bottom-heavy way because, among other things, it's more frugal.
> Employee separations resulting from this action are expected to be permanent. The affected employees are not represented by a union or any other collective bargaining representative.
"Because referential integrity is a thing, and if you don't have all dependencies either explicitly declared or implicitly determinable in your plan, your cloud provider is going to enforce it for you."
No, you probably haven't read the piece in question. The post is ultimately about switching providers because Google's service crosses a line from (1) targeted advertising to (2) using personal and confidential information for model training.
A service to clean up the UI does nothing to solve the issue at hand.
The dealbreaker was data usage for AI training, not UI:
"We are going to use your email to train our LLMs. I'm not okay with that... my confidential commercial information is NOT okay to use to train your models [...] So... goodbye Gmail."
We had XMPP, and even Google Chat used that in the early days.
It's not like users haven't had the choice over the decades to pick software that runs on open standards. It's that the features and UX provided by closed software have been more compelling to them. Open standards and interoperability generally aren't features most people value when it comes to chat. They care mostly about what their friends and family are using.
The issue isn't closed vs open but business models. The reason most services don't support third-party clients is that their business model is based on advertising (aka wasting the user's time) and a third-party client would reduce said wasted time.
A proprietary/for-profit messenger could very well use open protocols and embrace third-party clients if its business model weren't explicitly based on anti-productivity.
Right. Unfortunately, people have overwhelmingly voted with their wallets, and prefer to pay with their time and attention (ignoring the fact that they're being psychologically manipulated into buying random products and services) rather than with actual cash.
I expect you could get some people to pay for a messaging platform, but it would be a very small platform, and your business would not grow very much. And most of your users will still have to use other (proprietary, closed) messaging services as well, to talk to their friends and family who don't want to pay for your platform. While that wouldn't be a failure, I wouldn't really call that a significant win, either.
This is why legislation/regulation is the only way to make this happen. The so-called "free market" (a thing that doesn't really exist) can never succeed at this, to the detriment of us all.
The problem is that there's not much of a market for an ecosystem of commercial chat clients that use open standards underneath. It's not like it hasn't been tried. What ultimately ends up happening is the market becomes a race to the bottom, chat clients become a commodity product, and innovation ceases. It's essentially what happened with Web browsers and why we don't have a particularly robust for-profit market in that space.
Google Chat used XMPP to build a user base and then cut it off from the Jabber network. That's when I stopped using it. Or was it when it got integrated into Gmail? Then they rebranded it several times, binning each iteration.
IAAL but this is not legal advice. Consult an attorney licensed in your jurisdiction for advice.
In general, agents "stand in the shoes" of the principal for all actions the principal delegated to them (i.e., "scope of agency"). So if Amy works for Global Corp and has the authority to sign legal documents on their behalf, the company is bound. Similarly, if I delegate power of attorney to someone to sign documents on my behalf, I'm bound to whatever those agreements are.
The law doesn't distinguish between mechanical and personal agents. If you give an agent the power to do something on your behalf, and it does something on your behalf under your guidance, you're on the hook for whatever it does under that power. It's as though you did it yourself.
Look, just because an LLM thing is named "agent" doesn't mean it is "legally an agent".
If I were an attorney in court, I would argue that a "mechanical or automatic agent" cannot truly be a personal agent unless it can be trusted to do things only in line with that person's wishes and consent.
If an LLM "agent" runs amok and does things without user consent and without reason or direction, how can the person be held responsible, except for saying that they never should've granted "agency" in the first place? Couldn't the LLM's corporate masters be held liable instead?
That's where "scope of agency" comes in. It's no different than if Amy, as in my example, ran amok and started signing agreements with the mob to bind Global Corp to a garbage pickup contract, when all she had was the authority to sign a contract for a software purchase.
So in a case like this, if your agent exceeded its authority, and you could prove it, you might not be bound.
Keep in mind that an LLM is not an agent. Agents use LLMs, but are not LLMs themselves. If you only want your agent to be capable of doing limited actions, program or configure it that way.
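To make "configure it that way" concrete, here's a minimal sketch (hypothetical tool and names, not any real framework's API) of exposing a shell tool to an agent that is gated so it simply cannot run license-acceptance commands:

```python
import shlex
import subprocess

# Hypothetical policy for an agent's only shell tool: anything not on
# the allowlist, or containing a license-acceptance token, is refused
# before a process is ever spawned.
ALLOWED_COMMANDS = {"ls", "cat", "git", "python"}
FORBIDDEN_TOKENS = {"--licenses", "--accept-license", "accept"}

def run_shell_tool(command: str) -> str:
    """The single shell entry point the agent is given."""
    tokens = shlex.split(command)
    if not tokens:
        return "refused: empty command"
    if tokens[0] not in ALLOWED_COMMANDS:
        return f"refused: {tokens[0]} is not an allowed command"
    if any(t in FORBIDDEN_TOKENS for t in tokens):
        return "refused: this tool may not accept licenses or agreements"
    result = subprocess.run(tokens, capture_output=True, text=True, timeout=60)
    return result.stdout + result.stderr

print(run_shell_tool("sdkmanager --licenses"))  # refused (twice over)
print(run_shell_tool("ls"))                     # runs normally
```

An agent wired up this way is, by construction, incapable of the act in dispute, which is exactly the kind of scope limitation being discussed.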
That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to be a legal agent that forms binding contracts.
It's basically the same with longstanding customer service "agents". They are authorized to do only what they can semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system, it's not like you can take the company to court to enforce that.
> That's due to authorized humans at the company setting up the LLMs to publish statements which are materially relied upon. Not because company officers have delegated legal authority to the LLM process to form binding contracts.
It's not that straightforward. A contract, at heart, is an agreement between two parties, both of whom must have (among other things) reasonable material reliance on each other being either the principals themselves or operating under the authority of their principal.
I am sure that Air Canada did not intend to give the autonomous customer service agent the authority to make the false promises that it did. But it did so anyway by not constraining its behavior.
> It's basically the same with longstanding customer service "agents". They are authorized to do only what they can semantically express in the company's computer system. Even if you get one to verbally agree "We will do X for $Y", if they don't put that into their computer system, it's not like you can take the company to court to enforce that.
I don't think that's necessarily correct. I believe the law (again, not legal advice) would bind the seller to the agent's price mistake unless (1) the customer knew it was a mistake and tried to take advantage of it anyway, or (2) the price was so outlandish that no reasonable person would believe it. That said, there's often a wide gap between what the law requires and what actually happens. Nobody's going to sue over a $10 price mistake.
Yes, but neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts. To the extent you could get an LLM to state such a thing, it would be specious and still not binding. Someone calling the airline support line and assuming the airline agent is authorized to form general contracts doesn't change the legal reality that they are not, right?
Fundamentally, running `sdkmanager --licenses` does not consummate a contract [0]. Rather, running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software - it's the continued use of the software which indicates acceptance of the terms. If an LLM does this unbeknownst to the user, it just means there is one less indication that the user is aware of the license. Of course, this butts up against the limits to litigation you pointed out, which is why contracts of adhesion mostly revolve around making users disclaim legal rights and upholding copyright (which can be enforced out of band at the scale where it starts to matter).
[0] if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.
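For the curious, my understanding of the mechanism (hedged; the Android SDK's exact paths and hash scheme may differ) is a plain marker-file pattern: `sdkmanager --licenses` drops a file containing a hash of each accepted license text into a `licenses/` directory, and build tools merely check for its presence. A rough Python sketch of that pattern:

```python
import hashlib
from pathlib import Path

# Illustrative marker-file pattern (approximate, not the SDK's exact
# scheme): "acceptance" is recorded as a hash of the license text in a
# well-known directory that other tools check.
LICENSES_DIR = Path("sdk/licenses")

def record_acceptance(name: str, license_text: str) -> None:
    LICENSES_DIR.mkdir(parents=True, exist_ok=True)
    digest = hashlib.sha1(license_text.encode()).hexdigest()
    (LICENSES_DIR / name).write_text(digest + "\n")

def is_accepted(name: str, license_text: str) -> bool:
    marker = LICENSES_DIR / name
    if not marker.exists():
        return False
    return hashlib.sha1(license_text.encode()).hexdigest() in marker.read_text()

record_acceptance("android-sdk-license", "Terms and Conditions ...")
print(is_accepted("android-sdk-license", "Terms and Conditions ..."))  # True
```

Which underscores the footnote's point: anything (or anyone) can write that file, so the file itself can't be what forms the contract.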
(I edited the sentence you quoted slightly, to make it more explicit. I don't think it changes anything but if it does then I am sorry)
> neither airline agents nor LLM agents hold themselves out as having legal authority to bind their principals in general contracts.
You don't have to explicitly hold yourself out as an agent to be treated as one. Circumstances matter. There's an "apparent authority" doctrine of agency law I'd encourage you to study.
> Rather, running this command is an indication that the user has been made aware that there is a non-negotiated contract they will be entering into by using the software - it's the continued use of the software which indicates acceptance of the terms.
> if it did then anyone could trivially work around this by skipping the check with a debugger, independently creating whatever file/contents this command creates, or using software that someone else already installed.
Courts tend not to take kindly to "hacking attempts" like this, and you could find yourself liable for copyright infringement or trespass to chattels, or possibly even facing criminal charges under the CFAA, if you do.
Let me put it this way: U.S. and English law are stacked squarely in favor of the protection of property rights.
> Courts tend not to take kindly to "hacking attempts" like this
Yes, because law is generally defined in terms of intent, knowledge, and other human-level qualities. The attempt to "hack around" the specific prompt is irrelevant because the specific prompt is irrelevant, just like the specific weight of paper a contract is printed on is irrelevant - any contract could define them as relevant, but it's generally not beneficial to do so.
> There's an "apparent authority" doctrine of agency law I'd encourage you to study
Sure, but this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing some legally binding agreements. In this case there isn't even a counterparty capable of judging whether the command is being run by someone with the apparent intent and authority to legally bind. So you're essentially saying there is no way for a user to run a software program without extending it the authority to form legal contracts on their behalf. I'd call this a preposterous attempt to "hack around" the utter lack of intent on the part of the person running the program.
The instruction prompt is absolutely relevant: it conveys to the agent the scope of its authority and the principal's intent, and would undoubtedly be used as evidence if a dispute arose over it. It's not different in kind from instructions you would give a human being.
> this still relies upon an LLM agent being held out as some kind of bona fide legal agent capable of executing some legally binding agreements
Which it can...
> You're essentially saying there is no way to run a software program without extending it the legal authority to form legal contracts on your behalf.
I'm not saying that at all. Agency law is very mature at this stage, and the test to determine that an actor is an agent and whether it acted within the scope of its authority is pretty clear. I'm not going to lay it all out here, so please go study it independently.
I'm also not entirely sure what your angle here is: are you trying to say that an LLM-based agent cannot under any circumstances be treated as acting on its principal's behalf? Or are you just being argumentative and trying to find some angle to be "right"?
By "prompt" I was referring to the prompting of the user, by a program such as `sdkmanager --licenses`.
If a user explicitly prompted an LLM agent to "accept all licenses", then I'd agree with you.
> Which it can...
It can be held out as a legal agent, sure. But in this case, is it? Is the coding agent somehow advertising itself to the sdkmanager program and/or Google that it has the authority to form legal contracts on behalf of its user?
> I've counseled you already to study the law - go do that before we discuss this further
While this is a reasonable ask for continuing the line of discussion, I'd say it's a lot of effort for a message board comment. So I won't be doing it, at least not to the level of being able to respond intelligently here.
Instead, I would ask what you would say are the minimum requirements for having an LLM coding agent execute commands on your own machine while explicitly not granting it the authority to form legally binding contracts.
(Obviously I'm not asking this in the capacity of binding legal advice, and obviously one would still be responsible for any damage said process caused.)
What makes you think I'm not a lawyer? The point is that we're not in court, we're in a pseudonymous open forum on the Internet, where everyone has a stinky opinion, where actual attorneys are posting disclaimers that they are explicitly not giving legal advice.
Correct, but the "if Amy works for Global Corp and has the authority to sign legal documents on their behalf" does a lot of work here.
At $WORK, a multi-billion-dollar company with tens of thousands of developers, we train people to never "click to accept", explaining it like: "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"
I admit we're not always successful (people do still occasionally click), but at least we're trying.
> At $WORK, a multi-billion-dollar company with tens of thousands of developers, we train people to never "click to accept", explaining it like: "look, you wouldn't think of sitting down and signing a contract binding the whole MegaCorp; what makes you think you can 'accept' something binding the company?"
That sounds pretty heavy-handed to me. Their lawyers almost certainly advised the company to do that--and I might, too, if I worked for them. But whether it's actually necessary to keep the company out of trouble... well, I'm not so sure. For example, Bob the retail assistant at the local clothing store couldn't bind his employer to a new jeans supplier contract, even if he tried. This sounds like one of those things you keep in your back pocket and take out as a defense if someone decides to litigate over it. "Look, Your Honor, we trained our employees not to do that!"
At least with a mechanical agent, you can program it not to be even capable of accepting agreements on the principal's behalf.
If they say they don't, and they do, then that's fraud, and they could be held liable for any damages that result. And, if word got out that they were defrauding customers, that would result in serious reputational damage to Apple (who uses their security practices as an industry differentiator) and possibly a significant customer shift away from them. They don't want that.
The government would never prosecute a company for fraud where that fraud consists of cooperating with the government after promising to a suspected criminal that they wouldn't.
That's not the scenario I was thinking of. There are other possibilities here, like providing a decryption key (even if by accident) to a criminal who's stolen a business's laptop, or if a business had made contractual promises to their customers, based on Apple's promises to them. The actions would be private (civil) ones, not criminal fraud prosecution.
Besides, Apple's lawyers aren't stupid enough to forget to carve out a law-enforcement demand exception.
Cooperating with law enforcement cannot be fraud. Fraud is lying to obtain illegal gains. I think it's legally OK to lie if the goal is to catch a criminal and help the government.
For example, in the 20th century, a European manufacturer of encryption machines (Crypto AG [1]) built a backdoor at the request of governments and was never punished - instead, it received generous payments.
None of these really match the scenario we're discussing here. Some are typical big-company stuff, some are technical edge cases, but none are "Apple lies about a fundamental security practice consistently and with malice."
That link you provided is a "conspiracy theory," even by the author's own admission. That article is also outdated; OCSP is as dead as a doornail (no doubt in part because it could be used for surveillance) and they fixed the cleartext transmission of hardware identifiers.
Are you expecting perfection here? Or are you just being argumentative?
> That link you provided is a "conspiracy theory," even by the author's own admission.
"Conspiracy theory" is not the same as a crazy, crackhead theory. See: Endward Snowden.
Full quote from the article:
> Mind you, this is definitionally a conspiracy theory; please don’t let the connotations of that phrase bias you, but please feel free to read this (and everything else on the internet) as critically as you wish.
> and they fixed the cleartext transmission of hardware identifiers
Have you got any links for that?
> Are you expecting perfection here? Or are you just being argumentative?
I expect basic things people should expect from a company promoting themselves as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).
It was noted at the bottom of the article as a follow up.
> I expect basic things people should expect from a company promoting themselves as respecting privacy. And I don't expect them to be much worse than GNU/Linux in that respect (but they definitely are).
The problem with the word “basic” is that it’s entirely subjective. What you consider “basic,” others consider advanced. Plus the floor has shifted over the years as threat actors have become more knowledgeable, threats more sophisticated, and technologies advanced.
Finally, the comparison to Linux doesn’t make a lot of sense. Apple provides a solution of integrated hardware, OS, and services. Linux has a much smaller scope; it’s just a kernel. If you don’t operate services, then by definition, you don’t have any transmitted data to protect. Nevertheless, if you consider the software packages that distros package alongside that kernel, I would encourage you to peruse the CVE databases to see just how many security notices have been filed against them and which remain open. It’s not all sunshine and roses over in Linux land, and never has been.
At the end of the day, it's all about how you weigh the evidence. If those examples are sufficient to tip the scales for you, that's your choice. However, Apple's overall trustworthiness--particularly when it comes to protecting people's sensitive data--remains high in the market. Even the examples you posted aren't especially pertinent to that (except for iCloud Keychain, where the complaint isn't whether Apple is securely storing it, but the fact that it got transmitted to them in the first place, and there exists some unresolved ambiguity about whether it is appropriately deleted on demand).
Terrible security... compared to what? Some ideal state that exists in your head, or a real-world benchmark? Do you expect them to ignore lawful orders from governments as well?
You’re being extremely argumentative all over the comments to this story. Do you yourself own any solar panels? Your ceaseless naysaying constantly contradicts people’s lived experience (including mine) as owners.
Focus on solutions, not trying to be right. It’s aggravating.
> Your ceaseless naysaying constantly contradicts people’s lived experience (including mine) as owners.
Also, like, every study on this matter. The efficiency drop from being dirty for vaguely modern solar panels is _tiny_; below 5% and potentially below 1%.