After Deep Blue, Garry Kasparov proposed "Centaur Chess"[1], where teams of humans and computers would compete against each other. For about a decade, such a team was superior to either an unaided computer or an unaided human. These days pure AI teams tend to be much stronger.
How would pure AI ever be "much stronger" in this scenario?
That doesn't make any sense to me whatsoever; at best the team can be "equally strong", which makes the approach non-viable because the human isn't providing any value... And for the human in the loop to be an actual demerit, you'd have to count the time taken for each move toward the final score, which isn't normal in chess.
But I'm not knowledgeable on the topic; I'm just expressing my surprise and my inability to square this claim with my limited experience of the game.
You can be so far ahead of someone that their input (if you act on it) can only make things worse. That's it. If a human "teams up" with a chess AI today and does anything other than agree with its moves, it will just drag things down.
These human-in-the-loop systems basically list possible moves with the likelihood of winning, no?
So how would the human be a demerit? It'd mean that the human for some reason decided to always pick the option that the AI wouldn't take, but how would that make sense? Then the AI would list the "correct" move with a higher likelihood of winning.
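That's roughly how engine assistance works in practice: you ask the engine for its top few lines and an evaluation of each. A minimal sketch using the python-chess library and a local Stockfish binary (both assumed to be installed; the depth and MultiPV count are arbitrary choices of mine):

    import chess
    import chess.engine

    board = chess.Board()
    # Ask the engine for its top 3 candidate lines (MultiPV) with scores.
    with chess.engine.SimpleEngine.popen_uci("stockfish") as engine:
        infos = engine.analyse(board, chess.engine.Limit(depth=18), multipv=3)
        for info in infos:
            score = info["score"].pov(board.turn)  # eval from the mover's side
            move = info["pv"][0]                   # first move of this line
            print(board.san(move), score)

The centaur then picks among those lines; the claim upthread is that picking anything other than the top line is, these days, almost always a mistake.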
The point of this strategy was to mitigate traps, but that would now have to be inverted: the opponent AI would have to gaslight the human into thinking he's stopping his own AI from falling into a trap. While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, reverting things back to the baseline where the human is essentially a non-factor rather than a demerit
>So how would the human be a demerit? It'd mean that the human for some reason decided to always pick the option that the AI wouldn't take, but how would that make sense? Then the AI would list the "correct" move with a higher likelihood of winning.
The human will be a demerit any time they're not picking the move the model would have made.
>While that might work in a few cases, the human would quickly learn that his ability to overrule the optimal choice is flawed, reverting things back to the baseline where the human is essentially a non-factor rather than a demerit
Sure, but it's not a Centaur game if the human is doing literally nothing every time. The only way for a human+AI team not to be outright worse than the AI alone is for the human to do nothing at all, and that's not a team. You've just delayed the computer's response for no good reason.
If you had a setup where the computer just did its thing and never waited for the human to provide input, but the human still had an unused button they could press for a chance to say something, that might technically count as "centaur", but it isn't really what people mean by the term. The delay in waiting for human input is the big disadvantage centaur setups have now that the human isn't really providing any value.
But why would that be a disadvantage large enough to cause the player to lose, which would be necessary for
> pure AI teams tend to be much stronger.
Maybe each turn has a time limit, and a human would need "n moments" to make the final judgement call, whereas the AI could delay the final decision right to the last moment for its final analysis? So the pure-AI player essentially gets an additional 10-30 s to simulate the game?
Why? If the human has final say on which play to make, I can certainly see them thinking they're proposing a better strategy when they're actually hurting their chances.
With the intelligence of models seeming spiky/lumpy, I suspect we'll see tasks and domains fall to AI one at a time. Some will happen quickly, and others may take far longer than we expect.
I feel lucky that in 2008 I could go into the Military Industrial Complex and work in places where I could be confident the results wouldn't be things I'd find objectionable. That seems like a much tougher prospect in 2026.
Boom’s pivot to trying to build turbines for data centers wasn’t surprising when data center deployments started using turbines. Either their CEO saw one of the headlines or their investors forwarded it over and it became their new talking point.
What is interesting is how many people saw the Boom announcement and came to believe that Boom was a pioneer of this idea. They’re actually a me-too that won’t have anything ready for a long time, if they can even pull it off at all.
> What is interesting is how many people saw the Boom announcement and came to believe that Boom was a pioneer of this idea. They’re actually a me-too that won’t have anything ready for a long time, if they can even pull it off at all.
My first thought when seeing that article was “I can buy one of these right now from Siemens or GE, and I could’ve ordered one at any time in the last 50 years.”
Boom doesn’t actually have a turbine yet. Their design partner publicly pulled out of their contract with Boom a while ago.
Boom has been operating on vaporware for a while. It’s one of those companies I want to see succeed but whatever they’re doing in public is just PR right now. Until they actually produce something (other than a prototype that doesn’t resemble their production goals using other people’s parts) their PR releases don’t mean a whole lot.
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said "Master, I have heard that objects are a very good thing - is this true?" Qc Na looked pityingly at his student and replied, "Foolish pupil - objects are merely a poor man's closures."
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire "Lambda: The Ultimate..." series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying "Master, I have diligently studied the matter, and now understand that objects are truly a poor man's closures." Qc Na responded by hitting Anton with his stick, saying "When will you learn? Closures are a poor man's object."
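(Both halves of the koan are literally true, which is the joke. A toy sketch of the first direction in Python, with made-up names: an "account object" built from nothing but closures and message dispatch.)

    def make_account(balance):
        # The state lives in the enclosing scope; the "methods" close over it.
        def deposit(amount):
            nonlocal balance
            balance += amount
            return balance

        def withdraw(amount):
            nonlocal balance
            balance -= amount
            return balance

        # Message dispatch: the returned closure *is* the object.
        def dispatch(message, *args):
            return {"deposit": deposit, "withdraw": withdraw}[message](*args)

        return dispatch

    acct = make_account(100)
    acct("deposit", 50)    # 150
    acct("withdraw", 30)   # 120

And the reverse direction is just as mechanical: an object with a single method is a closure.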
It's very important in this case to specify which orbit the satellite is going to be in. If you're in LEO like the International Space Station, you spend all day below the Van Allen belts, inside the magnetosphere, protected from all those charged particles that the sun is pumping out. You're still lacking the atmosphere's protection from cosmic rays, but that's not a huge dose.
If you go out to MEO, then suddenly you're outside that protective magnetic shield and you have to deal with charged particles smashing into you; you want a large mass of water or wax for shielding if you don't have radiation-tolerant electronics.
A dawn-dusk SSO, a sun-synchronous low Earth orbit whose plane stays roughly perpendicular to the direction of the sun so it gets near-constant sunlight, is harsher than normal LEO orbits because it passes over the poles, where the protection from the Earth's magnetic field is weakest, but it's still a lot better than higher orbits. This is probably where you'd want a datacenter, to get constant sunlight and as much protection as possible.
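If you're curious what "sun-synchronous" costs in terms of inclination, the standard J2 nodal-precession formula gives it in a few lines. A back-of-the-envelope sketch (textbook constants; the function name is mine):

    import math

    MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
    R_E = 6378.137e3     # Earth's equatorial radius, m
    J2 = 1.08263e-3      # Earth's oblateness coefficient
    # Sun-synchronous: the orbit plane must precess 360 degrees per year.
    SSO_RATE = 2 * math.pi / (365.2422 * 86400)  # rad/s

    def sso_inclination_deg(altitude_m):
        a = R_E + altitude_m           # semi-major axis of a circular orbit
        n = math.sqrt(MU / a**3)       # mean motion, rad/s
        # J2 nodal precession: dOmega/dt = -1.5 * n * J2 * (R_E/a)^2 * cos(i)
        cos_i = -SSO_RATE / (1.5 * n * J2 * (R_E / a) ** 2)
        return math.degrees(math.acos(cos_i))

    print(sso_inclination_deg(700e3))  # ~98.2 degrees

The cos(i) comes out slightly negative, which is why sun-synchronous orbits are near-polar and slightly retrograde, and why they cross the weakly shielded polar regions on every revolution.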
I'll be the first to cheer if we get rid of industrial agriculture, but there's an awful lot of land in the world that doesn't receive enough rain for farming yet is still fine grazing land, and when used for grazing it still supports most of its original ecology. And there's a lot of damaged, blemished, etc. produce that pigs are happy to eat but which can't be sold in a supermarket.
I'd like to see meat consumption drop to something like half to a quarter of its current level rather than be eliminated outright.
OK, but (1) a lot of good land is also being used to feed livestock; the biomass of livestock is quite a bit higher than the biomass of humans; and (2) even reducing meat consumption by just a quarter would have several times the combined impact of all the AI data centres.
> Are you ok with having a codebase that is effectively a black box?
When was the last time you looked at the machine code your compiler was giving you? For me, doing embedded development on an architecture without a mature compiler, the answer is last Friday, but I expect that the vast majority of readers here never look at their machine code. We have abstraction layers that we've come to trust because they work in practice. To do our work we depend on the companies that develop our compilers, where we can at least see the output, but also on the companies that make our CPUs, which we couldn't debug without a huge amount of specialized equipment. So I expect that mostly people will be OK with it.
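For anyone who wants to try the equivalent move one layer down without a cross-compiler handy, here's a loose analogy in Python (bytecode rather than machine code, but the same "inspect the layer below you" exercise):

    import dis

    def add(a, b):
        return a + b

    # Prints the bytecode the interpreter compiled the function to
    # (LOAD_FAST, BINARY_OP, etc.); a layer almost nobody ever inspects.
    dis.dis(add)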
>When was the last time you looked at the machine code your compiler was giving you?
You could rephrase that as “when was the last time your compiler didn’t work as expected?”. Never in my whole career in my case. Can we expect that level of reliability?
I’m not making the argument that “the LLM is not good enough”; that would bring us back to the boring discussion of “maybe it will be”.
The thing is that human language is ambiguous and subject to interpretation, so I think we will occasionally have wrong output even with perfect LLMs. That makes black-box behavior dangerous.
We certainly can't expect that with LLMs now, but neither could compiler users back in the 1970s. I do agree that we'll probably never have them generating code without more back and forth, where the LLM pushes back when its instructions are ambiguous, plus testing afterwards.
For vaccines like the measles vaccine, where vaccination can entirely stop the spread in a vaccinated population, this can be true, at least until enough people think this way that measles starts spreading in your vicinity.
But with Covid-19, vaccination wasn't able to eliminate the spread, so it's mostly about protecting yourself rather than protecting others.
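The asymmetry falls out of simple arithmetic. The classic herd-immunity threshold is 1 - 1/R0, and it assumes immunity that actually blocks transmission, which is exactly the property the Covid vaccines lacked. A quick sketch (the R0 values are commonly cited ballparks, not precise figures):

    def herd_immunity_threshold(r0):
        # Fraction of the population that must be immune so that each
        # case infects fewer than one other person: 1 - 1/R0.
        return 1 - 1 / r0

    for disease, r0 in [("measles", 15), ("Covid-19 (ancestral)", 3)]:
        print(f"{disease}: R0 ~{r0} -> ~{herd_immunity_threshold(r0):.0%} immune")
    # measles: R0 ~15 -> ~93% immune
    # Covid-19 (ancestral): R0 ~3 -> ~67% immune

With an R0 around 15, even a small pocket of refusers pushes coverage below that ~93% threshold and reopens transmission.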
Back when those machines existed, UB meant "the precise behaviour is not specified by the standard; the specific compiler for the specific machine chooses what happens" rather than the modern "a well-formed program does not invoke UB". For what it's worth, I compile all my code with -fwrapv et al.
[1] https://en.wikipedia.org/wiki/Advanced_chess