Just because you can solve problems with one class of tools doesn’t mean another class is pointless. A whole new class of problems just became solvable.
> A whole new class of problems just became solvable.
This is almost by definition not really true. LLMs spit out whatever they were trained on, mashed up. The solutions they have access to are exactly the ones that already exist, and for the most part those solutions must already exist in droves for the LLM to have any semblance of utility.
If you're referring to "mass code output" as "a new class of problem", we've had code generators of differing input complexity for a very long time; it's hardly new.
So what do you really mean when you say that a new class of problems became solvable?
Valve selling skins is just so trivial relative to dopamine-inducing doom-scrolling, social media in general, the toxicity of the news cycle, I can keep going.
It would be super Democrat-American to address Valve's loot boxes before, say, fucking healthcare.
We need a government priority Jira board of things that need to be addressed. Loot boxes _might_ make the backlog.
I do think that there's a meaningful difference between writing code that was bad (which I definitely did and do) and writing code where I didn't know what each line did.
early on when I was doing iOS development I learned that "m34" was the magic trick to make flipping a view around have a nice perspective effect, and I didn't know what "m34" actually meant but I definitely knew what the effect of the line of code that mutated it was...
Googling it now, this seems like a common experience for early iOS developers :)
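For anyone curious, m34 is one entry of the CATransform3D matrix, and setting it is what adds the perspective divide. A minimal sketch of the usual pattern (Swift/Core Animation; the ~500pt "eye distance" is just the empirically popular value, not anything official):

    import UIKit

    // Flip a view around the y-axis with a perspective effect.
    // Setting m34 = -1/d places a virtual camera at distance d;
    // leave it at 0 and the rotation looks flat.
    func flipWithPerspective(_ view: UIView, angle: CGFloat) {
        var transform = CATransform3DIdentity
        transform.m34 = -1.0 / 500.0   // the "magic" line
        transform = CATransform3DRotate(transform, angle, 0, 1, 0)
        view.layer.transform = transform
    }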
> Jacobin is not a whole journal on literal communism.
It’s a magazine with a professed socialist view point but it’s more aligned with left-of-center American politics. Think Sanders or Mamdani rather than Stalin or Mao.
> Sanders or Mamdani
Sanders and Mamdani are about as far left of center as one can get at the moment, such that they almost meld into Stalin or Mao.
The mental gymnastics you’re doing to blunt that fact is absolutely incredible.
> Sanders and Mamdani are about as far left of center as one can get at the moment
No, they aren’t. They are about as far left of center as you can get and be competitive in US elections, maybe, but that’s a very different thing. There’s a lot to their left (as you can see from the opposition both face from leftists who view them as sellouts to capitalist/imperialist/etc. institutions).
> Sanders and Mamdani are about as far left of center as one can get at the moment, such that they almost meld into Stalin or Mao.
So, when's Mamdani's Great Purge coming? Do you think he's gonna live up to the standards of his historical ideological equivalent, Stalin, and execute a couple hundred thousand elites (if we're going by the same proportions as the USSR), or is he going all out - maybe he could get a million deaths in? Maybe he could also start a famine or two on the way there?
The utter insanity of American politics baffles me. "Anything left is abhorrent totalitarian communism in the making" isn't just a meme, it's a foundational piece of mainstream American ideology that has been at its core for nearly a century now.
While I agree that the US has a historical obsession with communism, "anything right is abhorrent totalitarian fascism in the making" has been a far more commonly stated position for the last decade. At this point, it's necessary to regard both "communism" and "fascism" as simple pejoratives and focus on the specific policies being discussed.
This is unfortunate but perhaps inevitable. There are not that many left that remember the horrors of either ideology clearly.
Stalin was an ideological authoritarian who executed political rivals and used lethal force, price controls, and other governmental tools to control the economy and the general working population. The idea that Sanders and Mamdani advocate anything close to that is laughable.
The rhetoric on both the right and left that liken today's politics to extremism in the 20th century is a ridiculous anachronism that needs to be called out more often.
I always joke that Google pays for a dedicated developer to spend their full time just to make pelicans on bicycles look good. They certainly have the cash to do it.
That's such a silly argument. X, OpenAI and others have large Saudi investments. In the grand scheme of things the US is largely indebted to China and Japan.
The charges you mention are likely all from hospitals. How about Pluvicto for advanced prostate cancer at $42,500 per monthly dose? All the players are in on it.
The sound way to manage costs and avoid these games is via Medicare for all, with premiums paid by progressive rate taxation of income. Maybe even wealth beyond a very large amount.
Based on what? Why even leave this comment if you’re just going to say “would likely be worse off” without giving literally any evidence or even suggestion of why.
Insurance is a pool. The bigger your pool the more you spread the risk/load. It’s brain dead simple. Medical care is a human right, beyond that.
Nothing about our system makes any sense and it is built to pad so many pockets in entirely opaque ways between you and the care you actually receive. Cut out several layers of middlemen and the costs go down. God forbid you have an accident and you end up at the wrong hospital when the one down the road is in-network but the one they took you to is out-of-network and you wake up owing thousands of dollars.
I had pretty good marketplace insurance this year but the plan I’m on now isn’t even offered anymore and if I got the next closest offered plan I’d be paying 6X as much for the premiums with higher copays on top. I’ll be switching to my union offered plan instead which is much better than the new marketplace plan but still worse than the marketplace insurance I had before.
> God forbid you have an accident and you end up at the wrong hospital when the one down the road is in-network but the one they took you to is out-of-network and you wake up owing thousands of dollars.
If you examine the statement of benefits for your plan, you will find that it says something similar to this:
> Emergency Services are covered at the in-network cost-sharing level as required by applicable state or federal law if services are received from a non participating (out-of-network) provider.
> The member is responsible for applicable in-network cost-sharing amounts (any deductible, copay or coinsurance). The member is not responsible for any charges that may be made in excess of the allowable amount.
You’re right. The No Surprises Act did make this a lot better. However, it still doesn’t cover ground transport (though specific state laws do in some cases).
Additionally, for post-stabilization care the hospital is going to shove a lot of papers in your face, and they’re probably not going to tell you that one of them says you agree to pay whatever those services cost and to waive your protection against balance billing. Yes, they’re supposed to present it on its own and with your full consent, and yes, you can dispute it, but people sign the forms and then still get screwed.
I think it's telling that people are shocked at the assertion I just made, which is not complicated or outlandish or hard to understand and is in fact backed up by referendum and attempted implementation results for state-level programs. I think two big things are happening that fog people's understanding of this issue:
First, there's a widespread belief that M4A is popular, based on public opinion polling. The problem is that you can make almost anything popular in public opinion polling, and a lot of public opinion polling is deliberately run by interest groups to generate narratives about popularity. It's true: the "M4A" that poll respondents support would be enormously popular: it's proposed as an abstraction with no clear tradeoffs. When you confront voters with the prospect of increased taxes and the loss of their current insurance policies, the wheels come off the wagon.
The second big factor is that the demographics of people with employer-provided coverage --- the majority of all non-Medicare covered people in the US --- are not what you'd expect. As soon as you stipulate employer coverage, the cohort you're describing excludes basically all fixed-income and Medicaid-eligible households. The median household income of a family with employer-provided health insurance is closer to $120k than it is to $50k.
For those households, M4A is not a very compelling deal:
* There is a very clear trend in the data for them to already be satisfied with their existing health care.
* The visible component of their insurance spending (their out-of-pocket, excluding employer side payments) is usually quite small compared to total spending.
* M4A would mechanically eliminate the availability of existing plans (unless you came up with a truly weird and distortionary system of tax incentives to keep Anthem and United and Aetna policies going).
Best case: costs that are hidden from those households today become visible, and you hope people are chill about that (in sort of the same way we hoped that people would be chill about inflation given wage increases outpacing it --- see how that went). Worst case, a lot of these households would lose their existing, favored insurance plans and pay more.
Useful here to note that broad taxes on the middle and especially upper-middle class are how Europe funds generous social service packages; you can't get there by taxing the bejeezus out of billionaires. You should do that anyways, just because it's a good idea, but there aren't enough of them to pay the absolutely gobsmacking cost of a single-payer health system in one of the wealthiest large countries in the world.
I'll cop to this: what I wrote last night, about "currently insured" people, was way too vague. I should have said "households with employer-provided health coverage" (again: that's most non-Medicare households). I plead strep throat; you're going to have to give me a break on clarity today.
Sorry, but I reject this thinking. You’re essentially saying that Medicare for All is bad because it’ll seem to cost more once the way the money works is no longer obscured, so people will be mad, and because it has to be worse than their existing policies.
I’m still not seeing how or why it has to be worse. This just seems like an assumption you’re making. Also, sure, the exact existing policy you have won’t be available, by definition, because the system has entirely changed, but once again, if you want private insurance you will still be able to get it, as is the case in other countries with socialized medicine.
Also really don’t see why you would say that the polls that say people want socialized medicine are rigged and not-representative but the polls that you’re saying show that most people with private insurance are happy with it are accurate. Not really sure how that stands to reason.
I really feel like the argument you’re making here boils down to M4A is bad because it has to be worse and people who have private insurance now are happy with their plans and could only have them replaced with something that would be worse. Or even more simply: Change is scary so I guess we’re stuck with the current system and actually people like it so don’t rock the boat.
Also the median income for someone with employer provided healthcare is 120K? I’m going to need some data on that. Also you’re then cutting out everyone with marketplace insurance which is 24 million people.
More people are poised to lose Medicaid, and my marketplace insurance plan, if I chose to accept it for next year, was going to cost me 6X for the monthly premiums and require co-pays I didn’t have before, as well as much larger co-pays for the ones I did.
I’m going to be completely honest. I don’t care if people making 120K/year are upset if their visible cost for healthcare is more obvious or not. From 2024 census data 41.2% of households made above 100K annually. That number becomes roughly 33% when you step it up to $150K/year and drops to something like 12% when you get to $200K/year. By the time you get to $400K/year you’re at like 3%.
Also households as a unit isn’t necessarily representative of the distribution of people within them.
I reject the idea that government systems are inherently bad and so we can’t have them. I reject the premise that the wealthy will be forced to have worse healthcare to subsidize the majority of Americans. I absolutely reject any notion that our private healthcare as it exists is efficient, affordable and the superior system.
I didn't say Medicare For All was bad. I said a large cohort of existing insured people would be worse off under it. Those are different claims. Whether or not I think it's good has nothing to do with whether or not what I said was correct.
What I think is funny about this is, if I had left a one-line comment saying "this CEO's story about his health insurance costs tells me we all need M4A", nobody would have blinked. Instead, I made a somewhat skeptical observation about it, and got messages demanding I "show my work", or like this one, about how you "reject my thinking".
If people understand and strongly support the policy, they should probably make a point of not being totally bumfuzzled by arguments about it!
Well, you can’t prove a negative, so I’m not sure how well a theoretical one-line comment about a CEO’s insurance costs meaning we need M4A would have been received.
Regardless, if you’re not willing to support your argument that’s fine, but at the same time, if you’re going to put something out there and then be upset when other people are skeptical of your skepticism, then I don’t know what to tell you.
I still don’t really see how anything you’ve offered necessarily means people who currently have employer-provided private insurance plans will be worse off. I especially don’t see it because people at the income level you proposed as the median for households with employer-provided insurance often still have employer-provided private plans in countries that also have a public health system.
I guess maybe here is the meat of it and what matters. How are you defining worse off? Are you defining it based on quality of care/outcomes or in a financial sense? Either way seems pretty speculative to me but I’d be interested to know which (or both) of those you think makes them worse off.
What argument did I not support? The one you assumed I was making, but did not actually make? You still haven't responded to the actual argument I did make.
I agree by the way that a one line comment of “show your work” is not useful or constructive, much like your original one line comment. (I don’t mean that as a slam against you either, I appreciate that you actually followed up with additional information)
I disagree that I’m not responding to your actual argument and am specifically asking you to clarify the terms of what “worse off” means so that I can address it with more specificity or at least understand what you’re saying.
I still think citing an opinion poll to argue that people are happy with their employer insurance while also making an argument about how opinion polling is deeply flawed is a very strange way to back up your own argument.
I have yet to actually hear anything that supports the idea that people with employer-provided insurance will be worse off because of M4A, other than you saying that the costs being less obscured means people would be more upset. This wasn’t even an argument about the real cost of M4A vs private insurance, it was just a statement saying that the money looks different.
Sorry, I can't follow any of this. It sounds like you want to have an argument about whether M4A is better than our current system. I'm not a good debate partner for that.
> something fundamental has changed that enables a computer to pretty effectively understand natural language.
You understand how the tech works right? It's statistics and tokens. The computer understands nothing. Creating "understanding" would be a breakthrough.
Edit: I wasn't trying to be a jerk. I sincerely wasn't. I don't "understand" how LLMs "understand" anything. I'd be super pumped to learn that bit. I don't have an agenda.
I think it is fair to say that AIs do not yet "understand" what they say or what we ask them.
When I ask it to use a specific MCP to complete a certain task, and it proceeds to not use that MCP, this indicates a clear lack of understanding.
You might say that the fault was mine, that I didn't set up or initialize the MCP tool properly, but wouldn't an understanding AI recognize that it didn't have access to the MCP and tell me that it cannot satisfy my request, rather than blindly carrying on without it?
LLMs consistently prove that they lack the ability to evaluate statements for truth. They lack, as well, an awareness of their unknowing, because they are not trying to understand; their job is to generate (to hallucinate).
It astonishes me that people can be so blind to this weakness of the tool. And when we raise concerns, people always say
"How can you define what 'thinking' is?"
"How can you define 'understanding'?"
These philosophical questions are missing the point. When we say it doesn't "understand", we mean that it doesn't do what we ask. It isn't reliable. It isn't as useful to us as perhaps it has been to you.
You can make categorical judgements on many things that are hard to define, often by taking the opposite approach. It's hard to define what consciousness itself is, yet I can define what it isn't, and that teases a lot out.
So I can categorically say LLMs do not understand by quite literally understanding what NOT understanding is.
We know what LLMs are and what they are NOT.
Please see my earlier comment above:
> LLM's do not think, understand, reason, reflect, comprehend and they never shall.
How do you know what kind of "understanding" a python has? Why python and not a lizard? Or a bird? What method do you use for evaluating this? Does a typical python do what you tell your ai agent to do?...
C'mon, this comparison seems to be very, very unscientific. No offense...
Before one may begin to understand something, one must first be able to estimate the level of certainty. Our robot friends, while really helpful and polite, seem to be lacking in that department. They actually think the things we've written on the internet, in books, academic papers, court documents, newspapers, etc. are true. Where the humans aren't omniscient it fills the blanks with nonsense.
> Where the humans aren't omniscient it fills the blanks with nonsense
As do most humans. People lie. People make things up to look smart. People fervently believe things that are easily disproved. Some people are willfully ignorant, anti-science, anti-education, etc.
The problem isn't the transformer architecture... it is the humans who advertise capabilities that are not there yet.
There are human beings who believe absolutely insane and easily disprovable things. Even in the face of facts they continue to remain willfully ignorant.
Humans can convince themselves of almost anything. So I don’t understand your point.
Yes because working on co-pilot makes one well educated in philosophy of mind.
The standard, meaningless, HN appeal to authority. "I worked at Google, therefore I am an expert on both stringing a baroque lute and the finer points of Lao cooking. "
Gemini 3 gives a nice explanation if asked "can you explain how you don't really understand anything"
I’ve worked with many neuroscience researchers during my career. At a minimum, I’m extremely well-read on the subject of cognition.
I am not going to lie or hide my experience. The world is a fucked up place because we no longer respect “authority”. I helped build one of these systems; my opinion is as valid as yours.
Yours is the “standard meaningless” response that adds zero technical insight. Let’s talk about supportive tracing or optimization of KV values during pre-training and how those factors impact the apparent “understanding” of the resulting model.
> At this point I’d argue that humans “hallucinate” and/or provide wrong answers far more often than SOTA LLMs.
Humans are remarkably consistent in their behavior in trained environments. That's why we trust humans to perform dangerous, precise and high stakes tasks. Humans have the meta-cognitive abilities to understand when their abilities are insufficient or when they need to reinforce their own understanding, to increase their resilience.
If you genuinely believe humans hallucinate more often, then I don't think you actually do understand how copilot works.
There is a qualitative difference between humans and LLMs 'hallucinating' (if we can even apply this terminology to humans, which I contend is inappropriate).
I'd add a simple thought experiment: a poor student doing a multiple-choice exam paper and achieving a poor mark, let's say 30%, versus a child of 10 attempting the same paper and achieving, say, 50%. Looked at quantitatively, a perspective arises that attributes more understanding to the child who has chanced 50% on a multiple-choice paper than to the student who actually studied the subject.
Qualitatively, however, and we know this intuitively, it is certainly NOT the case that the child of 10 comprehends or understands more than the poor student.
Your response is crazy to me. Humans are known to be remarkably inconsistent in behavior. Please read ‘Thinking, Fast and Slow’ or at least go back to your HS psych 101 notes.
> Humans have the meta-cognitive abilities to understand when their abilities are insufficient or when they need to reinforce their own understanding
> as someone who is a sociopath completely devoid of ethics
Ah yes... the hundred thousand researchers and engineers who work at MS are all evil. Many people who've made truly significant contributions to AI have either worked directly (through MS Research) or indirectly (OpenAI, Anthropic, etc) at MS. ResNet and concepts like Differential Privacy were invented there.
What about the researchers at Stanford, Carnegie Mellon, and MIT who receive funding from companies like MS? Are they all evil sociopaths, too? Geoffrey Hinton's early research was funded by Microsoft btw.
I originally joined MS in the early 90s (then retired) and came back to help build Copilot. The tech was fantastic to work with, we had an amazing team, and I am proud of what we accomplished.
You seem slightly confused between the people who invent technology and the assholes who use it for evil. There is nothing evil about the transformer. Humans are the problem.
T2 would have been a different movie if Miles Dyson just said to Sarah Connor: "The tech was fantastic to work with, we had an amazing team, and I am proud of what we accomplished."
We could use a little more kindness in discussion. I think the commenter has a very solid understanding of how computers work. The “understanding” question is somewhat complex, but I do agree with you that we are not there yet. I do think the paradigm shift, though, is more about the fact that now we can interact with the computer in a new way.
The end effect certainly gives off an "understanding" vibe, even if the method of achieving it is different. The commenter obviously didn't mean the way the human brain understands.
Are we even sure we understand the hardware? My understanding is even that is contested, for example orchestrated objective reduction, holonomic brain theory or GVF theory.
“You understand how the brain works right? It’s neurons and electrical charges. The brain understands nothing.”
I’m always struck by how confidently people assert stuff like this, as if the fact that we can easily comprehend the low-level structure somehow invalidates the reality of the higher-level structures. As if we know concretely that the human mind is something other than emergent complexity arising from simpler mechanics.
I’m not necessarily saying these machines are “thinking”. I wish I could say for sure that they’re not, but that would be dishonest: I feel like they aren’t thinking, but I have no evidence to back that up, and I haven’t seen non-self-referential evidence from anyone else.
> other than the propulsion and landing gear, and construction materials
"Apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh water system and public health, what have the Romans ever done for us?" Monty Python's Life of Brian.
P.S. It is relative, but there are quite a lot of differences IMHO.
Yes indeed, but they still use wings, fly in the air and so on.
Artificial neural networks have very little in common with real brains and have no structural or functional similarities besides "they process information, and they have things called neurons". They can perform some of the same tasks though, like how a quadcopter can perform some of the duties as a homing pigeon.
"I don't "understand" how LLMs "understand" anything."
Why does the LLM need to understand anything. What today's chatbots have achieved is a software engineering feat. They have taken a stateless token generation machine that has compressed the entire internet's vocabulary to predict the next token and have 'hacked' a whole state management machinery around it. End result is a product that just feels like another human conversing with you and remembering your last birthday.
Engineering will surely get better and while purists can argue that a new research perspective is needed, the current growth trajectory of chatbots, agents and code generation tools will carry the torch forward for years to come.
If you ask me, this new AI winter will thaw in the atmosphere even before it settles on the ground.
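To make the "state management around a stateless machine" point concrete, here's a minimal sketch; `generate` is a hypothetical stand-in for the model call, not any real API:

    // The model has no memory of its own; the only "state" is the
    // transcript we keep on our side and re-send on every turn.
    struct Message { let role: String; let text: String }

    // Hypothetical stateless call: output depends only on the transcript passed in.
    func generate(_ transcript: [Message]) -> String {
        return "(model output)"
    }

    var transcript: [Message] = []

    func chat(_ userInput: String) -> String {
        transcript.append(Message(role: "user", text: userInput))
        let reply = generate(transcript)   // full history goes in every single time
        transcript.append(Message(role: "assistant", text: reply))
        return reply
    }

Everything that feels like memory - remembering your birthday, your earlier messages - lives in that transcript (plus whatever retrieval gets bolted on around it), not in the weights.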
LLMs activate similar neurons for similar concepts not only across languages, but also across input types. I’d like to know if you’d consider that as a good representation of “understanding” and if not, how would you define it?
Anthropic is pretty notorious for peddling hype. This is a marketing article - it has not undergone peer-review and should not be mistaken for scientific research.
If I could understand what the brain scans actually meant, I would consider it a good representation. I don't think we know yet what they mean. I saw some headline the other day about a person with "low brain activity", and said person was in complete denial about it; I would be too.
As I said then, and probably echoing what other commenters are saying - what do you mean by understanding when you say computers understand nothing? do humans understand anything? if so, how?
Does a computer understand how hot or cold it is outside? Does it understand that your offspring might be cranky because they’re hungry? Or not hungry, just tired? Can it divine the difference?
Does a computer know if your boss is mad at you or if they had a fight with their spouse last night, or whatever other reason they may be grumpy?
Can a computer establish relationships… with anything?
How about when a computer goes through puberty? Or menopause? Or a car accident? How do those things affect them?
Don’t bother responding, I think you get the point.
LLMs aren't as good as humans at understanding, but it's not just statistics. The stochastic parrot meme is wrong. The networks create symbolic representations in training, with huge multidimensional correlations between patterns in the data, whether they're temporal or semantic. The models "understand" concepts like emotions, text, physics, arbitrary social rules and phenomena, and anything else present in the data and context in the same fundamental way that humans do it. We're just better, with representations a few orders of magnitude higher resolution, much wider redundancy, and multi-million node parallelism with asynchronous operation that silicon can't quite match yet.
In some cases, AI is superhuman, and uses better constructs than humans are capable of, in other cases, it uses hacks and shortcuts in representations, mimics where it falls short, and in some cases fails entirely, and has a suite of failure modes that aren't anywhere in the human taxonomy of operation.
LLMs and AI aren't identical to human cognition, but there's a hell of a lot of overlap, and the stochastic parrot "ItS jUsT sTaTiStIcS!11!!" meme should be regarded as an embarrassing opinion to hold.
"Thinking" models that cycle context and systems of problem solving also don't do it the same way humans think, but overlap in some of the important pieces of how we operate. We are many orders of magnitude beyond old ALICE bots and MEgaHAL markov chains - you'd need computers the size of solar systems to run a markov chain equivalent to the effective equivalent 40B LLM, let alone one of the frontier models, and those performance gains are objectively within the domain of "intelligence." We're pushing the theory and practice of AI and ML squarely into the domain of architectures and behaviors that qualify biological intelligence, and the state of the art models clearly demonstrate their capabilities accordingly.
For any definition of understanding you care to lay down, there's significant overlap between the way human brains do it and the way LLMs do it. LLMs are specifically designed to model constructs from data, and to model the systems that produce the data they're trained on, and the data they model comes from humans and human processes.
You appear to be a proper alchemist, but you can't support an argument of understanding if there is no definition of understanding that isn't circular.
If you want to believe the friendly voice really understands you, we have a word for that, faith.
The skeptic sees the interactions with a chatbot as a statistical game that shows how uninteresting (e.g. predictable) humans and our stupid language are.
There are useful gimmicks coming out, like natural language processing for low-risk applications, but this form of AI pseudoscience isn't going to survive. It will take some time, though, for research to catch up and describe the falsehoods of contemporary AI toys.
Understanding is the thing that happens when your neurons coalesce into a network of signaling and processing such that it empowers successful prediction of what happens next. This powers things like extrapolation, filling in missing parts of perceived patterns, temporal projection, and modeling hidden variables.
Understanding is the construction of a valid model. In biological brains, it's a vast parallelized network of columns and neuron clusters in coordinated asynchronous operation, orchestrated to ingest millions of data points both internal and external, which results in a complex and sophisticated construct comprising the entirety of our subjective experience.
LLMs don't have the subjective experience module, explicitly. They're able to emulate the bits that are relevant to being good at predicting things, so it's possible that every individual token inference process produces a novel "flash" of subjective experience, but absent the explicit construct and a persistent and coherent self construct, it's not mapping the understanding to the larger context of its understanding of its self in the same way humans do it.

The only place where the algorithmic qualities needed for subjective experience reside in LLMs is the test-time process slice, and because the weights themselves are unchanged in relation to any novel understanding which arises, there's no imprint left behind by the sensory stream (text, image, audio, etc.). Absent the imprint mechanism, there's no possibility to perpetuate the construct we think of as conscious experience, so for LLMs, there can never be more than individual flashes of subjectivity, and those would be limited to very low resolution correlations a degree or more of separation away from the direct experience of any sensory inputs, whereas in humans the streams are tightly coupled to processing, update in real-time, and persist through the lifetime of the mind.
The pieces being modeled are the ones that are useful. The utility of consciousness has been underexplored; it's possible that it might be useful in coordination and orchestration of the bits and pieces of "minds" that are needed to operate intelligently over arbitrarily long horizon planning, abstract generalization out of distribution, intuitive leaps between domains that only relate across multiple degrees of separation between abstract principles, and so on. It could be that consciousness will arise as an epiphenomenological outcome from the successful linking together of systems that solve the problems LLMs currently face, and the things which overcome the jagged capabilities differential are the things that make persons out of human minds.
It might also be possible to orchestrate and coordinate those capabilities without bringing a new mind along for the ride, which would be ideal. It's probably very important that we figure out what the case is, and not carelessly summon a tortured soul into existence.
I think it’s a disingenuous read to assume original commenter means “understanding” in the literal sense. When we talk about LLM “understanding”, we usually mean it from a practical sense. If you give an input to the computer, and it gives you an expected output, then colloquially the computer “understood” your input.
It could very well be that statistics and tokens is how our brains work at the computational level too. Just that our algorithms have slightly better heuristics due to all those millennia of A/B testing of our ancestors.
What do you mean by “understand”? Do you mean conscious?
Understand just means “parse language” and is highly subjective. If I talk to someone African in Chinese they do not understand me but they are still conscious.
If I talk to an LLM in Chinese it will understand me but that doesn’t mean it is conscious.
If I talk about physics to a kindergartner they will not understand but that doesn’t mean they don’t understand anything.
I have solved more problems with tools like sed and awk, you know, actual tools, than I’ve ever solved by entering tokens into an LLM.
Nobody seemed to give a fuck as long as the problem was solved.
This is getting out of hand.