It isn't really type inference. Each closure gets a unique type. Rather it's an automatic decision of what traits (think roughly "superclasses" I guess if you aren't familiar with traits/typeclasses) to implement for that type.
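A tiny Rust sketch of what I mean (hypothetical code): each closure below has its own unique, unnameable type, and the compiler decides which of the Fn / FnMut / FnOnce traits that type implements based on how the closure captures its environment.

    fn main() {
        let s = String::from("hello");

        // Captures `s` by shared reference only -> implements Fn (and FnMut, FnOnce).
        let print_it = || println!("{s}");
        print_it();
        print_it();

        let mut count = 0;
        // Mutates a captured variable -> implements FnMut (and FnOnce), but not Fn.
        let mut bump = || count += 1;
        bump();
        bump();

        // Moves `s` into the closure and consumes it -> implements only FnOnce.
        let consume = move || drop(s);
        consume();

        println!("count = {count}");
    }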
I am not sure how Haskell works but I think what the previous poster meant is that the types get determined at compile time. Closures are akin to macros except you can't see the expanded code.
I've been using /rewind in claude code (the terminal, not using vscode at all) quite a bit recently without issue - if that's the feature you're asking about.
Not discounting at all that you might "hold it" differently and have a different experience. E.g. I basically avoid claude code having any interaction with the VCS at all - and I could easily see VCS interaction being a source of bugs with this sort of feature.
I mean double tapping escape, going back up the history, and choosing the “restore conversation and code” option. Sometimes bits of code are restored, but rarely all changes.
It worked when first released but hasn’t for ages now.
An expectation of professionalism, training and written material on software design, providing incentives (like promotions) to not produce crap, etc.
It's not a world where everything produced is immediately verified.
If a human consistently only produced the quality of work Claude Opus 4.5 is capable of, I would expect them to be fired from just about any job in short order. Yes, they'd get some stuff done, but they'd do too much damage to be worth it. Of course humans are much more expensive than LLMs to manage, so this doesn't mean it can't be a useful tool... it's just not that useful a tool yet.
> with tedious manual checks for specific error conditions
And specifically: Lots of checks for impossible error conditions - often then supplying an incorrect "default value" in the case of those error conditions which would result in completely wrong behavior that would be really hard to debug if a future change ever makes those branches actually reachable.
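Something like this (a made-up Rust example, not from any real codebase): the lookup "cannot" fail given how callers use it today, but instead of propagating or failing loudly, an arbitrary default gets returned.

    use std::collections::HashMap;

    // Callers only ever pass "dev" or "prod", so the lookup "cannot" fail...
    fn port_for_env(env: &str) -> u16 {
        let ports = HashMap::from([("dev", 8080u16), ("prod", 443)]);
        // ...but instead of propagating an error, a bogus default is supplied.
        // If a new environment is ever added without updating the map,
        // everything silently binds to port 0 and the bug is miserable to trace.
        *ports.get(env).unwrap_or(&0)
    }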
I always thought that for the vast majority of your codebase, the right thing to do with an error is to propagate it. Either blindly, or by wrapping it with a bit of context info.
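E.g. in Rust, a minimal sketch (assuming the anyhow crate for error wrapping; nothing here is specific to any particular project):

    use anyhow::{Context, Result};
    use std::fs;

    // Most call sites just bubble the error up with `?`, optionally wrapping it
    // with a little context so the eventual report says where things went wrong.
    fn read_config(path: &str) -> Result<String> {
        let raw = fs::read_to_string(path)
            .with_context(|| format!("failed to read config at {path}"))?;
        Ok(raw)
    }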
I don’t know where the LLMs are picking up this paranoid tendency to handle every single error case. It’s worth knowing about the error cases, but it requires a lot more knowledge and reasoning about the current state of the program to think about how they should be handled. Not something you can figure out just by looking at a snippet.
Training data from junior programmers or introductory programming teaching material. No matter how carefully one labels data, the combination of programming's subjectivity (which blunts how effectively human labeling and reinforcement can filter this out) and the sheer volume of low-experience code in the input corpus makes this outcome basically inevitable.
Garbage in garbage out as they say. I will be the first to admit that Claude enables me to do certain things that I simply could not do before without investing a significant amount of time and energy.
At the same time, the number of anti-patterns the LLM generates is more than I can manage. No, Claude.md and Skills.md have not fixed the issue.
Building a production-grade system using Claude has been a fool's errand for me. Whatever time/energy I save by not writing code, I end up paying back when I read code that I did not write and fix anti-patterns left and right.
I rationalized it a bit, deflecting by saying this is the AI's code, not mine. But no: this is my code and it's bad.
> At the same time, the number of anti-patterns the LLM generates is more than I can manage. No, Claude.md and Skills.md have not fixed the issue.
This is starting to drive me insane. I was working on a Rust CLI that depends on Docker, and Opus decided to just… keep the CLI going with a warning “Docker is not installed” before jumping into a pile of garbage code that looks like it was written by a lobotomized kangaroo, because it tries to use an Option<Docker> everywhere instead of making sure it's installed and quitting with an error if it isn't.
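All I wanted was something like this rough sketch (names and message made up): check once at startup and exit with an error, so the rest of the code never needs an Option<Docker>.

    use std::process::{exit, Command};

    // Check once, up front, and quit with an error if Docker isn't available.
    fn require_docker() {
        let ok = Command::new("docker")
            .arg("--version")
            .output()
            .map(|o| o.status.success())
            .unwrap_or(false);
        if !ok {
            eprintln!("error: this tool requires Docker, and `docker` was not found on PATH");
            exit(1);
        }
    }

    fn main() {
        require_docker();
        // ...the rest of the CLI can assume Docker exists, no Option<Docker> needed.
    }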
What do I even write in a CLAUDE.md file? The behavior is so stupid I don’t even know how to prompt against it.
> I don’t know where the LLMs are picking up this paranoid tendency to handle every single error case.
Think about it, they have to work in a very limited context window. Like, just the immediate file where the change is taking place, essentially. Having broader knowledge of how the application deals with particular errors (catch them here and wrap? Let them bubble up? Catch and log but don't bubble up?) is outside its purview.
I can hear it now, "well just codify those rules in CLAUDE.md." Yeah but there's always edge cases to the edge cases and you're using English, with all the drawbacks that entails.
I have encoded rules against this in CLAUDE.md. Claude routinely ignores those rules until I ask "how can this branch be reached?" and it responds "it can't. So according to <rule> I should crash instead" and goes and does that.
The answer (as usual) is reinforcement learning. They gave ten idiots some code snippets, and all of them went for the "belt and braces" approach. So now that's all we get, ever. It's like the previous versions that spammed emojis everywhere despite that not being a thing whatsoever in their training data. I don't think they ever fixed that, just put a "spare us the emojis" bandaid in the system prompt.
This is my biggest frustration with the code they generate (but it does make it easy to check whether my students have even looked at the generated code). I don't want it to fail silently or hard-code an error message; it creates a pile of lies to work through in future debugging.
Bad tests and bad error handling have been the worst-performing parts of Claude for me.
In particular writing tests that do nothing, writing tests and then skipping them to resolve test failures, and everybody's favorite: writing a test that greps the source code for a string (which is just insane, how did it get this idea?)
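For the uninitiated, the source-grepping "test" looks roughly like this (a hypothetical reconstruction; the path and string are invented): it asserts that the code contains a string, not that it behaves correctly.

    // tests/smoke.rs
    #[test]
    fn retry_logic_exists() {
        // Passes as long as the text appears somewhere in the file,
        // regardless of whether the retry logic actually works.
        let src = std::fs::read_to_string("src/client.rs").expect("read source");
        assert!(src.contains("max_retries"), "expected retry logic in client");
    }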
Seriously. Maybe 60% of the time I use Claude for tests, the "fix" for the failing tests is also to change the application code so the test passes (in some cases it will want to make massive architecture changes to accommodate the test, even if there's an easy way to adapt the test to better fit the arch). Maybe half the time that's the right thing to do, but the other half of the time it is most definitely not. It's a high enough error rate that it's borderline whether it's even useful.
Usually you want to fix the code that's failing a test.
The assumption is that your test is right. That's TDD. Then you write your code to conform to the tests. Otherwise what's the point of the tests if you're just trying to rewrite them until they pass?
Not my experience at all when I occasionally try making something purely coded by AI for fun. It starts off fine but the pile of sub-optimal patterns slowly builds towards an unmaintainable mess with tons of duplication of code, and state that somehow needs to be kept in sync. Tests and linters can't test that the code is actually reasonable code...
Doesn't mean it's not a useful tool - if you read and think about the output you can keep it in check. But the "100% of my contributions to Claude Code were written by Claude Code" claim by the creator makes me doubt this is being done.
Using AI doesn’t really change the fact that keeping ones and zeroes in check is like trying to keep quicksand in your hands and shape it.
Shaping of a codebase is the name of the game - this has always been, and still, is difficult. Build something, add to it, refactor, abstraction doesn’t sit right, refactor, semantics change, refactor, etc, etc.
I’m surprised that so few seem to get this. Working in enterprise code, many codebases 10-20 years old could just as well have been produced by LLMs.
We’ve never been good at paying down debt, and you kind of need a bit of OCD to keep a code base in check. LLMs exacerbate a lack of continuous moulding, as iterations can be massive and quick.
I was part of a big software development team once, and the necessity I felt there, namely being able to let go of the small details and focus on the big picture, is even more important when using LLMs.
Everyone has been stressing over losing their job because of AI. I'm genuinely starting to think this will end in 5x more work needing to clean up the mess caused. Who's going to maintain all this generated code?
That would be possible if you had just the spec, but after some time most of the code will not have been generated from the original spec, but through lots of back and forth adding features and fixing bugs. No way to run all that again.
Not that old big non-AI software doesn't have similar maintainability issues (I keep posting this example, but I don't actually want to call that company out specifically, the problem is widespread: https://news.ycombinator.com/item?id=18442941).
That's why I'm reluctant to complain about the AI code issues too much. The problem of how software is written, on the higher level, the teams, the decisions, the rotating programmers, may be bigger than that of any particular technology or person actually writing the code.
I remember a company where I looked at a contractor job, they wanted me to fix a lot of code they had received from their Eastern European programmers. They complained about them a lot in our meeting. However, after hearing them out I was convinced the problem was not the people generating the code, but the ones above them who failed to provide them with accurate specs and clear guidance, and got surprised at the very end that it did not work as expected.
Similar with AI. It may be hard to disentangle what is project management, what is actually the fault of the AI. I found that you can live with pockets of suboptimal but mostly working code well enough, even adding features and fixing bugs easily, if the overall architecture is solid, and components are well isolated.
That is why I don't worry too much about the complaints here about bad error checks and other small stuff. Even if it is bad, you will have lots of such issues in typical large corporate projects, even with competent people. That's because programmers keep changing, management focuses on features over anything else (usually customers, internal or external, don't pay for code reorg, only for new features). The layers above the low level code are more important in deciding if the project is and remains viable.
From what the commenters say, it seems to me the problem starts much higher than the Claude code, so it is hard to say how much at fault AI generated code actually is IMHO. Whether you have inexperienced juniors or an AI producing code, you need solid project lead and architecture layers above the lines of code first of all.
That's why all the code in my project is generated from the "prompts" (actually just regular block comments + references) and so all of that is checked in.
> Who's going to maintain all this generated code?
Other AI agents, I guess. Call Claude in to clean up code written by Gemini, then ChatGPT to clean up the bugs introduced by Claude, then start the cycle over again.
This is probably tongue in cheek, but I literally do this and it works.
I've had one llm one-shot a codebase. Then I use another one to review (with a pretty explicit prompt). I take that review and feed it to another agent to refactor. Repeat that a bunch of times.
In the cloud with a micro-service architecture this just makes sense. Expose an API and call it a day, who cares what's behind the API as long as it follows the spec.
Most of us in the financial side of this space think so as well. This is why AI Ludditism doesn't make sense - CAT Hydraulic Excavators didn't end manual shovelers, it forced them to upskill.
Similarly, Human-in-the-loop utilization of AI/ML tooling in software development is expected and in fact encouraged.
Any IP that is monetizable and requires significant transformation will continue to see humans-in-the-loop.
Weak hiring in the tech industry is for other reasons (macro changes, crappy/overpriced "talent", foreign subsidies, demanding remote work).
As in the ranking/mental model increasingly being used by management in upper market organizations.
A Coding copilot subscription paired with a competent developer dramatically speeds up product and feature delivery, and also significantly upskills less competent developers.
That said, truly competent developers are few and far between, and the fact that developers in (e.g.) Durham or remote are demanding a SF circa 2023 base makes the math to offshore more cost effective - even if the delivered quality is subpar (which isn't necessarily true), it's good enough to release, and can be refactored at a later date.
What differentiates a "competent" developer from an "average" developer is the learning mindset. Plenty of people on HN kvetch about being forced to learn K8s, Golang, Cloud Primitives, Prompt Engineering, etc or not working in a hub, and then bemoan the job market.
If we are paying you IB Associate level salaries with a fraction of the pedigree and vetting needed to get those roles, upskilling is the least you can do.
We aren't paying mid 6 figure TC for a code monkey - at that point we may as well entirely use AI and an associate at Infosys - we are paying for critical and abstract thinking.
As such, AI in the hands of a truly competent engineer is legitimately transformative.
PS. In the 5 minutes between starting and finishing writing the parent comment https://claude.ai/settings/usage just stopped displaying my quota usage... fun.
Never? Never for plastics either. It seems like there's always going to be a lot of cost in shaping these materials through carefully controlled very high temperature environments. On the plastic side of things just the filament you feed to a printer is a multiple of the cost of the plastic feedstock that goes into making the filament.
You can send off models and get them 3D printed in metal reasonably affordably today, reasonable as in "considering the time and expertise that go into making the model, making a one-off part like this isn't breaking the bank", not "competes with mass manufacturing on cost".
Is there any reason to believe there would be any? My understanding of PFAS is that they are used in the application of various coating like things (teflon originally, also since then waterproofing applications, paints, makeup, and firefighting foams)... none of which seem particularly related to making thermoplastics and pushing them through a nozzle into various shapes?
I feel like there's numerous database companies that rewrote an existing database faster/with slightly better features and turned it into a successful product. Just about all of the successful ones really. It's a market where "build a faster horse" has been a successful strategy.
Certainly some of the newer successful databases are written in more modern languages (for example Go with CockroachDB, and Go originally and now Rust with InfluxDB), but it's wrong to call these (or really any language) faster than C/C++; they're just more productive languages to develop reliable software in...
I agree, you see there's a lot in the database space. I just don't know that many have reached escape velocity; more often they've raised a bunch of venture capital funding, plateaued, and then have a big problem.
> the description as "the next evolution of sqlite" is offensive
That marketing is really the one thing that keeps me from considering this as a serious option.
To call back to an article from a few days ago, it's a signal of dishonest intent [1], and why in the world would I use an alternative built by apparently dishonest people when sqlite is right there and has one of the best track records in the industry?
MVCC is a non-locking algorithm for concurrent writers that big databases like Postgres use (with caveats, like aborting some transactions if conflicts would exist). It's not a matter of pushing locks around but of allowing multiple threads to operate on the data concurrently.
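A heavily simplified sketch of the idea (hypothetical toy code; real engines add commit/abort, visibility rules, conflict detection, and garbage collection on top): writers append new versions tagged with a transaction id instead of overwriting in place, and each reader only sees versions from transactions at or before its own snapshot.

    use std::collections::HashMap;

    struct Version {
        txid: u64,      // transaction that wrote this version
        value: String,
    }

    struct MvccStore {
        next_txid: u64,
        // Every key keeps all of its versions, newest last.
        rows: HashMap<String, Vec<Version>>,
    }

    impl MvccStore {
        fn new() -> Self {
            Self { next_txid: 1, rows: HashMap::new() }
        }

        // Begin a transaction: its snapshot is "everything written by earlier txids".
        fn begin(&mut self) -> u64 {
            let txid = self.next_txid;
            self.next_txid += 1;
            txid
        }

        // A write never blocks readers: it just appends a new version.
        fn write(&mut self, txid: u64, key: &str, value: &str) {
            self.rows.entry(key.to_string()).or_default().push(Version {
                txid,
                value: value.to_string(),
            });
        }

        // A read sees the newest version from a transaction at or before its snapshot.
        fn read(&self, txid: u64, key: &str) -> Option<&str> {
            self.rows.get(key)?
                .iter().rev()
                .find(|v| v.txid <= txid)
                .map(|v| v.value.as_str())
        }
    }

    fn main() {
        let mut db = MvccStore::new();
        let t1 = db.begin();
        db.write(t1, "balance", "100");

        let t2 = db.begin();            // t2's snapshot includes t1's write
        let t3 = db.begin();
        db.write(t3, "balance", "90");  // t3 updates without blocking t2

        assert_eq!(db.read(t2, "balance"), Some("100")); // t2 still sees the old value
        assert_eq!(db.read(t3, "balance"), Some("90"));
    }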