This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.
I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI; a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues, but at that point the AI clearly also had no real idea what it was doing, and just made things worse.
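To give a sense of why I underestimated it: a bare skeleton of the pipeline, with every phase stubbed out, already looks something like the sketch below. This is a minimal TypeScript sketch with illustrative names (not from dagre, ELK, or any other library); the real work, and all the subtle interactions between phases, lives inside the stubs.

```typescript
type NodeId = string;
interface Graph { nodes: NodeId[]; edges: Array<[NodeId, NodeId]>; }
interface Point { x: number; y: number; }

function layoutSugiyama(g: Graph): Map<NodeId, Point> {
  const acyclic = removeCycles(g);                  // 1. reverse a few edges so the graph is a DAG
  const layers = assignLayers(acyclic);             // 2. put each node on a horizontal layer
  const proper = insertDummyNodes(acyclic, layers); // 3. split edges that span more than one layer
  const order = minimizeCrossings(proper, layers);  // 4. reorder nodes per layer (barycenter/median sweeps)
  return assignCoordinates(order);                  // 5. final x/y coordinates, e.g. Brandes-Köpf alignment
}

// Stubs so the sketch type-checks; the real implementations are where the
// subtle interactions and unspoken assumptions live.
function removeCycles(g: Graph): Graph { return g; }
function assignLayers(g: Graph): Map<NodeId, number> {
  const layers = new Map<NodeId, number>();
  g.nodes.forEach(n => layers.set(n, 0));
  return layers;
}
function insertDummyNodes(g: Graph, layers: Map<NodeId, number>): Graph { return g; }
function minimizeCrossings(g: Graph, layers: Map<NodeId, number>): NodeId[][] { return [g.nodes]; }
function assignCoordinates(order: NodeId[][]): Map<NodeId, Point> {
  const out = new Map<NodeId, Point>();
  order.forEach((row, y) => row.forEach((n, x) => out.set(n, { x, y })));
  return out;
}
```

Each phase is a research topic in its own right, and the choices made in one phase quietly constrain what the later phases can do.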
So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.
So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
>My "actual job" isn't to write code, but to solve problems
Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
One ends up like the clueless manager type who hasn't touched a computer in 30 years. At which point there will be little reason for the actual job owners to retain their services.
Computer programming as a whole ends up relying on the canned experience of the AI data set, producing a growing ratio of AI churn in the available training code over time, and plateauing both itself and the AI, with the dubious prospect of reaching the Singularity as its only hope out of this.
Yet most organizations in existence pay the people “who hasn’t touched a computer in 30 years” quite a large amount of money to continue to solve problems, for some inscrutable reason… =)
> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.
Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).
A good rule of thumb that has worked well for me is to spend at least 20 min refining agent plans for every ~5 min of actual agent dev time. YMMV based on plan scope (obviously this doesn't apply to small fixes, and applies even more so to larger scopes).
What I find most "enlightening", and also frightening, is that people I've worked with for quite some time, and whom I respected for their knowledge and abilities, have started spewing AI nonsense and switching off their brains.
It's one thing to use AI like you might use a junior dev that does your bidding, or as a rubber duck. It's a whole other ballgame if you just copy and paste whatever it says as truth.
And regarding the claim that it obviously doesn't apply to small fixes: oh yes it does! The AI has tried to "cheat" its way out of a situation so many times it's not even funny any longer (compare yesterday's post about Anthropic's original take-home test, in which they themselves warn you not to just use AI to solve it, as it likes to try and cheat, for example by just enabling more than one core). It has done this enough times that sometimes, when I don't yet understand an answer well enough myself, I don't trust Claude and dismiss a correct assessment it made as "yet another piece of AI BS".
Let us use an analogy. Many (most?) people can tell a well-written book or story from a mediocre or a terrible one, even though the vast majority of readers haven't written any in their lives.
To distinguish good from bad doesn't necessarily require the ability to create.
This analogy serves my argument: in it, just as "most people" are mere readers (not only are they not writers, they're also nowhere near the level of a competent book editor or critic), the programmer becomes a mere user of the end program.
Not only would this be a bad way of running a publishing business regarding writing and editing (working at the level of understanding of "most people"), but even in the best case of it being workable, the publisher (or software company) can just fire the specialist and get some average readers/users to give a thumbs up or down to whatever it churns out.
I'm not actually sure that's true. There's plenty of controversy these days over books that are popular and beloved but are actually not very well written. I mean, I've been hearing this complaint since Twilight was popular.
I haven't read Twilight, but I've read a few beloved and popular books that are atrociously written from a literary standpoint. That does not mean they are not popular for a reason.
One I did read, out of morbid curiosity, is 50 Shades. It's utter dreck in terms of writing quality. It's trite, it's full of clichés, and formulaic in the extreme (and incidentally a repurposed Twilight fanfic; if you wonder about the weird references to hunger, there's the reason), but if you look at why it became popular, you might notice that it is extremely well crafted for its niche.
If you don't want a "billionaire romance" (yes, this is a well defined niche; there's a reason Grey is described as one) melded with the "danger" of vampire-transformed-into-traumatised-man-with-a-dark-side, it's easy to tear it apart (I couldn't get all the way through it - it was awful along the axes I care about), but as a study in flawlessly merging two niches popular with one of the biggest book-buying demographics that have extremely predictable and rigid expectations, it's really well executed.
I'd struggle to accept it as art, but as a particular kind of craft, it is a masterpiece even if I dislike the craft in question.
You will undoubtedly find poorly executed dreck that is popular just because it happened to strike a chord out of sheer luck as well, but a lot of the time I tend to realise that if I look at something I dislike and ask what made it resonate with its audience, it turns out that a lot of it resonated with its audience because it was crafted to hit all the notes that specific audience likes.
At the same time, it has never been the case that great pieces of literature were assured of doing well on release. Moby Dick, for example, only sold 3,000 copies during Melville's lifetime (which makes me feel a lot better about the sales of my own novels, though I don't hold out any hope of posthumous popularity) and was one of his least successful novels when it was first published. A lot of the most popular media of the time is long since forgotten, for good reason. And so we end up with a survivorship bias towards the past, where we see centuries of great classics that have stood the test of time and none of the dreck, and measure them up against dreck and art alike in contemporary media.
I have very little knowledge of how transistors shuffle ones and zeros out of registers. That doesn't prevent me from using them to solve a problem.
Computing is always abstractions. We moved from plugging wires to assembly, then to C, then we had languages that managed memory for you -- how on earth can you understand what the compiler should be doing, or what it is doing, if you don't deal with explicit pointers on a day-to-day basis?
We bring in libraries when we need code. We don't run our own database, we use something else, and we just do "apt-get install mysql" -- but then we moved on to "docker run", or perhaps we invoke it with the aws CLI. Who knows what Terraform actually does when we declare we want a resource.
I was thinking the other day about how abstractions like AWS or Docker are similar to LLMs. With AWS you just click a couple of buttons and you have a data store; you don't know how to build a database from scratch, and you don't need to. Of course, "to build a database from scratch you must first create the universe".
Some people still hand-craft assembly code to great benefit, but the vast majority don't need to in order to solve problems, and they can't.
This musing was in the context of what we do if/when AWS data centres are not available. Our staff are generally incapable of working in a non-AWS environment, something we have deliberately cultivated for years. AWS Outposts are one option, or perhaps we should run a non-AWS stack that we fully own and control.
Is relying on LLMs fundamentally any different from relying on AWS, or apt, or Java? Is it different from outsourcing? You concentrate on your core competency, which is understanding the problem and delivering a solution, not managing memory or running databases. This comes with risk -- all outsourcing does -- and if outsourcing to a single supplier you don't and can't understand is an acceptable risk, then is relying on LLMs not?
There's never been a case in my long programming career so far where knowing the low level details has not benefited me. The level of value varies but it is always positive.
When you use LLMs to write all your code you will lose (or never learn) the details. Your decision making will not be as good.
I've seen cases in my career where people knowing the low level things is actually a hindrance.
They start to fight the system, trying to optimise things by hand for an extra 2% of performance while adding 100% of extra maintenance cost because nobody understands their hand-crafted assembler or C code.
There will always be a place for people who do that, but in the modern world in most cases it's cheaper to just throw more money at hardware instead of spending time optimising - if you control the hardware.
If things run on customer's devices, then you need the low level gurus again.
I think there is a big difference. You could and should have both knowledge. This applies to whether you're a lowly programmer or a CEO. Knowing the details will always help you make better decisions.
I think it's a lot like outsourcing. And, expected quality of outsourcing aside, more importantly, I don't see outsourcing as the next step up on the ladder of programming abstraction. It's having someone else do the programming for you (at the same abstraction level).
wait, did you see the part where the person you are replying to said that writing the code themself was essential to correctly solving the problem?
Because they didn't understand the architecture or the domain models otherwise.
Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.
I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
Funny story -- I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True -- the caller is an existing customer of one of our competitors". Not at all what I meant.
I saw that part and I disagreed with the very notion, hence why I wrote what I did.
> Because they didn't understand the architecture or the domain models otherwise.
My point is that requiring or expecting an in-depth understanding of all the algorithms you rely on is not a productive use of developer time, because outside narrow niches it is not what we're being paid for.
It is also not something the vast majority of us do now, or have done for several decades. I started with assembler, but most developers have never-ever worked less than a couple of abstractions up, often more, and leaned heavily on heaps of code they do not understand because it is not necessary.
Sometimes it is. But for the vast majority of us pretending it is necessary all the time or even much of the time is a folly.
> I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?
Growing the people under me involves teaching them to solve problems, and already long before AI that typically involved teaching developers to stop obsessing over details with low ROI for the work they were actually doing, in favour of understanding and solving the problems of the business. Often that meant making them draw a line between what actually served the needs they were paid to solve and what was personally fun to them (I've been guilty of diving into complex low-level problems I find fun rather than the highest-ROI problems too - ask me about my compilers, my editor, my terminal - I'm excellent at yak shaving, but I work hard to keep that away from my work).
> If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?
For AI use: Tests. Tests. More tests. And, yes, skills and agents. Not primarily even to verify that it understands the specs, but to create harnesses to run them in agent loops without having to babysit them every step of the way. If you use AI and spend your time babysitting them, you've become a glorified assistant to the machine.
And nobody is talking about verifying if the AI bubble sort is correct or not - but recognizing that if the AI is implementing its own bubble sort, you're waaaay out in left field.
Especially if it’s doing it inline somewhere.
The underlying issue with AI slop, is that it’s harder to recognize unless you look closely, and then you realize the whole thing is bullshit.
Only if you don't constrain the tests. If you use agents adversarially in generating test cases, tests and review of results, you can get robust and tight test cases.
Unless you're in research, most of what we do in our day jobs is boilerplate. Using these tools is not yet foolproof, but with some experience and experimentation you can get excellent results.
I meant this more in the sense of there is nothing new under the sun, and that LLMs have been trained on essentially everything that's available online "under the sun". Sure, there are new SaaS ideas every so often, but the software to produce the idea is rarely that novel (in that you can squint and figure out roughly how it works without thinking too hard), and is in that sense boilerplate.
hahaha, oh boy. that is roughly as useful or accurate as saying that all machines are just combinations of other machines, and hence there is nothing unique about any machine.
Vertical CNC mills and CNC lathes are, obviously, different machines with different use cases. But if you compare within the categories, the designs are almost all conceptually the same.
So, what about outside of some set of categories? Well, generally, no such thing exists: new ideas are extremely rare.
Anyone who truly enjoys entering code character for character, refusing to use refactoring tools (e.g. rename symbol), and/or not using AI assistance should feel free to do so.
I, on the other hand, want to concern myself with the end product, which is a matter of knowing what to build and how to build it. There’s nothing about AI assistance that entails that one isn’t in the driver’s seat wrt algorithm design/choices, database schema design, using SIMD where possible, understanding and implementing protocols (whether HTTP or CMSIS-DAP for debugging microcontrollers over USB JTAG probe), etc, etc.
AI helps me write exactly what I would write without it, but in a fraction of the time. Of course, when the rare novel thing comes up, I either need to coach the LLM, or step in and write that part myself.
But, as a Staff Engineer, this is no different than what I already do with my human peers: I describe what needs doing and how it should be done, delegate that work to N other less senior people, provide coaching when something doesn’t meet my expectations, and I personally solve the problems that no one else has a chance of beginning to solve if they spent the next year or two solely focused on it.
Could I solve any one of those individual, delegated tasks faster if I did it myself? Absolutely. But could I achieve the same progress, in aggregate, as a legion of less experienced developers working in parallel? No.
LLM usage is like having an army of Juniors. If the result is crap, that’s on the user for their poor management and/or lack of good judgement in assessing the results, much like how it is my failing if a project I lead as a Staff Engineer is a flop.
> And nobody is talking about verifying if the AI bubble sort is correct or not - but recognizing that if the AI is implementing its own bubble sort, you're waaaay out in left field.
Verifying time and space complexity is part of what your tests should cover.
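As a rough illustration of what that can look like in practice, here is a hedged TypeScript sketch of a growth-ratio check: it times the sort under test at doubling input sizes and fails if the scaling looks closer to quadratic than to n log n. The names, sizes, and threshold are illustrative, and this kind of test needs generous slack to avoid flakiness.

```typescript
// Hypothetical harness; `sort` is any in-place number sort under test.
function timeSort(sort: (a: number[]) => void, n: number): number {
  const data = Array.from({ length: n }, () => Math.random());
  const start = performance.now();
  sort(data);
  return performance.now() - start;
}

function assertRoughlyLinearithmic(sort: (a: number[]) => void): void {
  const t1 = timeSort(sort, 200_000);
  const t2 = timeSort(sort, 400_000);
  // For n log n, doubling n should cost a bit over 2x; for n^2 it is ~4x.
  const ratio = t2 / t1;
  if (ratio > 3) throw new Error(`suspicious growth ratio: ${ratio.toFixed(2)}`);
}

// e.g. assertRoughlyLinearithmic(a => a.sort((x, y) => x - y));
```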
But this is also a funny example - I'm willing to bet the average AI model today can write a far better sort than the vast majority of software developers, and is far more capable of analyzing time and space complexity than the average developer.
In fact, I just did a quick test with Claude and asked for a simple sort that took time and space complexity into account. "Of course" it knows that pure quicksort is well established as suboptimal for a general-purpose sort, and it gave me a simple hybrid: insertion sort for small arrays, a heapsort fallback to stop pathological recursion, and a decently optimized quicksort. This won't beat e.g. timsort on typical data, but it's a good tradeoff between "simple" (quicksort can be written in 2-20 lines of code or so, depending on language and how much performance you're willing to sacrifice for simplicity) and addressing the time/space complexity constraints. It's also close to a variant that was covered in a DDJ article ca. 30 years ago, precisely because most developers didn't know how to write one and were still writing stupidly bad sorts manually instead of relying on an optimized library. Fewer developers know how to write good sorts today. And that's not bad - it's a result of not needing to think at that level of abstraction most of the time any more.
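For the curious, the sketch below is a hand-written illustration of that kind of hybrid (essentially introsort), not Claude's output and not the DDJ variant: quicksort with a median-of-three pivot, a depth limit that falls back to heapsort to prevent pathological recursion, and insertion sort for small ranges.

```typescript
const SMALL = 16; // cutoff below which insertion sort wins; tune per workload

function introsort(a: number[]): void {
  const depthLimit = 2 * Math.floor(Math.log2(a.length + 1));
  quick(a, 0, a.length - 1, depthLimit);
}

function quick(a: number[], lo: number, hi: number, depth: number): void {
  while (hi - lo > SMALL) {
    if (depth-- === 0) {          // too many bad partitions: fall back to guarantee O(n log n)
      heapsort(a, lo, hi);
      return;
    }
    const p = partition(a, lo, hi);
    // Recurse into the smaller side, loop on the larger: O(log n) stack depth.
    if (p - lo < hi - p) { quick(a, lo, p - 1, depth); lo = p + 1; }
    else                 { quick(a, p + 1, hi, depth); hi = p - 1; }
  }
  insertion(a, lo, hi);           // small ranges: simple and cache-friendly
}

function partition(a: number[], lo: number, hi: number): number {
  const mid = lo + ((hi - lo) >> 1);
  // Median-of-three: leave the median of a[lo], a[mid], a[hi] at a[hi] as the pivot.
  if (a[mid] < a[lo]) swap(a, lo, mid);
  if (a[hi] < a[lo]) swap(a, lo, hi);
  if (a[mid] < a[hi]) swap(a, mid, hi);
  const pivot = a[hi];
  let i = lo;
  for (let j = lo; j < hi; j++) {
    if (a[j] < pivot) swap(a, i++, j);
  }
  swap(a, i, hi);
  return i;
}

function insertion(a: number[], lo: number, hi: number): void {
  for (let i = lo + 1; i <= hi; i++) {
    const v = a[i];
    let j = i - 1;
    while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }
    a[j + 1] = v;
  }
}

function heapsort(a: number[], lo: number, hi: number): void {
  const n = hi - lo + 1;
  const sift = (root: number, end: number): void => {
    for (;;) {
      const child = 2 * root + 1;
      if (child > end) return;
      const pick = child + 1 <= end && a[lo + child] < a[lo + child + 1] ? child + 1 : child;
      if (a[lo + root] >= a[lo + pick]) return;
      swap(a, lo + root, lo + pick);
      root = pick;
    }
  };
  for (let start = (n >> 1) - 1; start >= 0; start--) sift(start, n - 1);
  for (let end = n - 1; end > 0; end--) { swap(a, lo, lo + end); sift(0, end - 1); }
}

function swap(a: number[], i: number, j: number): void {
  const t = a[i]; a[i] = a[j]; a[j] = t;
}
```

With the depth limit, the worst case is O(n log n) via the heapsort fallback, while typical inputs stay on the quicksort path.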
And this is also a great illustration of the problem: even great developers often have big blind spots, where AI will draw on results they aren't even aware of. Truly great developers will be aware of their blind spots and know when to research, but most developers are not great.
But a human developer, even a not so great one, might know something about the characteristics of the actual data a particular program is expected to encounter that allows a more efficient approach than this AI-coded hybrid sort for this particular application. This is assuming the AI can't deduce the characteristics of the expected data from the specs, even if a particular time and space complexity is mandated.
I encountered something like this recently. I had to replace an exact data comparison operation (using a simple memcmp) with a function that would compare data and allow differences within a specified tolerance. The AI generated beautiful code using chunking and all kinds of bit twiddling that I don't understand.
But what it couldn't know was that most of the time the two data ranges would match exactly, thus taking the slowest path through the comparison by comparing every chunk in the two ranges. I had to stick a memcmp early in the function to exit early for the most common case, because it only occurred to me during profiling that most of the time the data doesn't change. There was no way I could have figured this out early enough to put it in a spec for an AI.
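The shape of that fix is simple, even though the chunked comparison it guards isn't. Below is a hedged TypeScript sketch of the common-case early exit (the original is presumably C or C++ given the memcmp; names here are hypothetical).

```typescript
import { Buffer } from "node:buffer";

// Compare two arrays of doubles, allowing per-element differences up to `tol`.
function withinTolerance(a: Float64Array, b: Float64Array, tol: number): boolean {
  if (a.length !== b.length) return false;

  // Fast path for the common case found during profiling: the data hasn't
  // changed at all. Buffer.compare is a native memcmp-style comparison
  // (Node-specific; in a browser you'd loop over a Uint8Array view instead).
  const identical = Buffer.compare(
    Buffer.from(a.buffer, a.byteOffset, a.byteLength),
    Buffer.from(b.buffer, b.byteOffset, b.byteLength),
  ) === 0;
  if (identical) return true;

  // Slow path: element-by-element comparison with tolerance.
  for (let i = 0; i < a.length; i++) {
    if (Math.abs(a[i] - b[i]) > tol) return false;
  }
  return true;
}
```

The structure is the point: the cheap exact check handles the dominant case, and the expensive tolerant comparison only runs when something actually differs.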
> But a human developer, even a not so great one, might know something about the characteristics of the actual data a particular program is expected to encounter that allows a more efficient approach than this AI-coded hybrid sort for this particular application.
Sure. But then that belongs in a test case that 1) documents the assumptions, 2) demonstrates whether a specialised solution actually improves on the naive implementation, and 3) will catch regressions if/when those assumptions no longer hold.
My experience in that specific field is that odds are the human is making incorrect assumptions, and only very occasionally not, so having a proper test harness to benchmark this is essential to validate the assumptions whether a human or an AI does the implementation (and not least in case the characteristics of the data end up changing over time).
>There was no way I could have figured this out early enough to put it in a spec for an AI.
This is an odd statement to me. You act like the AI can only write the application once and can never look at any other data to improve the application again.
>only occurred to me during profiling
At least to me this seems like something that is at far more risk of being automated than general application design in the first place.
Have the AI design the app. Pass it off to CI/CD testing and compile it. Send to a profiling step. AI profile analysis. Hot point identification. Return to AI to reiterate. Repeat.
> At least to me this seems like something that is at far more risk of being automated than general application design in the first place.
This function is a small part of a larger application with research components that are not AI-solvable at the moment. Of course a standalone function could have been optimised with AI profiling, but that's not the context here.
If your product has code on it that can only be understood and worked on by the person that wrote it, then your code is too complex and underdocumented and/or doesn't have enough test coverage.
Your time would be better spent, in a permanent code base, trying to get that LLM to understand something than it would be trying to understand the thing yourself. It might be the case that you need to understand the thing more thoroughly yourself so you can explain it to the LLM, and it might be the case that you need to write some code so that you can understand it and explain it, but eventually the LLM needs to get it based on the code comments and examples and tests.
> My "actual job" isn't to write code, but to solve problems.
Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.
> That has increasingly shifted to "just" reviewing code
It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
> often a benefit to having a human have an understanding of the concrete details of the system
Further elaborating from my experience.
1. I think we're in the early stages, where agents are useful because we still know enough to coach well - knowledge inertia.
2. I routinely make the mistake of allowing too much autonomy, and will have to spend time cleaning up poor design choices that were either inserted by the agent, or were forced upon it because I had lost lock on the implementation details (usually both in a causal loop!)
I just have a policy of moving slowly and carefully now through the critical code, vs letting the agent steer. They have overindexed on passing tests and "clean code", producing things that cause subtle errors time and time again in a large codebase.
> burn the time to understand it.
It seems to me to be self-evident that writing produces better understanding than reading. In fact, when I would try to understand a difficult codebase, it often meant that probing+rewriting produced a better understanding than reading, even if those changes were never kept.
It's like any other muscle, if you don't exercise it, you will lose it.
It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading the code given by someone else (either a human or machine) is not enough and you end up evaluating perhaps the main use cases and the style. Most of the times you will find gaps while writing the code yourself.
> It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.
This is true whether an AI wrote the code or a co-worker, except the AI is always on hand to answer detailed questions about the code, do detailed analysis, and run extensive tests to validate assumptions.
It is very rarely productive any more to dig into low level code manually.
This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of ai generated code is ultimately creating more problems than it is solving and that the friction of manual coding may ultimately prove to be a great virtue.
This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.
> This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.
This to me looks like an analogy that would support what GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge for those that practice those techniques of least resistance in the short term.
This is not me saying big farming bad or something like that, just that your analogy, to me, seems perfectly in sync with what the GP is saying.
And those trade-offs can only pay off if the extra food produced can be utilized. If the farm is producing more food than can be preserved and/or distributed, then the surplus is deadweight.
This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.
This is the classic mistake all AI hypemen make by assuming code is an asset, like crops. Code is a liability and you must produce as little of it as possible to solve your problem.
As an "AI hypeman" I 100% agree that code is a liability, which is exactly why I relish being able to increasingly treat code as disposable or even unnecessary for projects that'd before require a multiple developers a huge amount of time to produce a mountain of code.
I’ll be honest with you pal - this statement sounds like you’ve bought the hype. The truth is likely between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.
I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...
Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.
(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.
"Airplanes are only 5 years away, just like 10 years ago" --Some guy in 1891.
Never use your phrase to say something is impossible. I mean, there are driverless Waymos on the street in my area, so your statement is already partially incorrect.
Nobody is saying it isn't possible. Just saying nobody wants to pay as much money as it's going to take to get there. At some point investors will say, meh, good 'nuff.
Just about a week ago I launched a 100% AI-generated project that short-circuits a bunch of manual tasks. What before took 3+ weeks of manual work to produce now takes us 1-2 days to verify instead. It generates revenue. It solved the problem of taking a workflow that was barely profitable and cutting costs by more than 90%. Half the remaining time is ongoing process optimization - we hope to fully automate away the remaining 1-2 days.
This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".
I fully agree that some places will drown in a deluge of AI generated code of poor quality, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove head first into "AI first" without an understanding of proper guardrails.
All employees solve problems. Developers have benefited from the special techniques they have learned to solve problems. If these techniques are obsolete, or are largely replaced by minding a massive machine, the character of the work, the pay for performing it, and social position of those who perform it will change.
> My "actual job" isn't to write code, but to solve problems.
You're like the 836453th person to say this. It's not untrue, but many of us will take writing over reviewing any day. Reviewing is like the worst part of the job.
I use AI heavily to review the code too, and it makes it far simpler.
E.g. "show me why <this assumption that is necessary for the code I'm currently staring at> holds" makes it far more pleasant to do reviews. AI code review tooling works well to reduce that burden. Even more so when you have that AI cod review tooling running as part of your agent loop first before you even look at a delivery.
"prove X" is another one - if it can't find a test case that already proves X and resorts to writing code to prove X, you probably need more tests, and now you have one,.
Then your job has turned into designing solutions, and asking a (sometimes unreliable) LLM to make them for you. If you keep at it, soon you'll accumulate enough cognitive debt to become a fossil, knowing what has to be done, but not quite how it is done.
And really, where is your moat? Why pay for a senior when a junior can prompt an LLM all the same? People are acting like it's juniors who are going to be out of work, as if companies are going to just keep paying seniors for their now obsolete skills.
Where do you think your moat is if you insist on sticking at a level of abstraction where the AI keeps eating into your job, instead of stepping up and handling architecture, systems design, etc. at a higher level?
Do you know how to write the code you write in assembler instead of a higher level language? How many of your peers do?
Most "know what has to be done, but not quite how it is done". This is just another level of abstraction.
I learnt the lesson 30+ years ago that while it was (and still occasionally is) useful to understand the principles of assembly, it had become useless to write assembly outside of a few narrow niches. A decade later I moved from C and C++ to higher level languages again.
Moving up the abstraction levels is learning leverage.
I deliver far more now - with or without AI - than I did when I wrote assembler, or C for that matter. I deliver more again with AI than without. That's what matters.
> Do you know how to write the code you write in assembler instead of a higher level language?
Actually, I do, but then you could ask me if I can develop in machine language, and I'd have to reply no. The abstraction is not the point, but the isolation from the core task. If you're a brilliant fashion designer who even knows how to sew, but you outsource your work to an Asian sweatshop, you can never be sure it's well done until you see the result.
Using an abstraction is not the same as using a black box.
> I deliver far more now - with or without AI - than I did when I wrote assembler, or C for that matter. I deliver more again with AI than without.
> That's what matters.
Also, in some disciplines, quality sometimes matters more than quantity.
A lot of people, who are on their way to doing truly professional work, have this epiphany.
The place you need to get to is understanding that you are being asked to ensure a problem is solved.
You’re only causing a larger problem by “solving” issues without both becoming an SME and ensuring that knowledge can be held by the organization, at all levels that the problem affects (onboarding, staff, project management, security, finance, auditors, c-suite.)
My "actual job" is a designer, not a career engineer, so for me code has always been how I ship. AI makes that separation clearer now. I just recently wrote about this.[0]
But I think the cognitive debt framing is useful: reading and approving code is not the same as building the mental model you get from writing, probing, and breaking things yourself. So the win (more time on problem solving) only holds if you're still intentionally doing enough of the concrete work to stay anchored in the system.
That said, if you're someone like me, I don't always need to fully master everything, but I do need to stay close enough to reality that I'm not shipping guesses.
Some of the biggest improvements I've made in the clarity and typesafety of the code I write came from seeing the weak points while slogging through writing code, and choosing or writing better libraries to solve certain problems. If everyone stops writing code, I can only imagine quality will stagnate.
for example, I got fed up with the old form library we were using because it wasn't capable of checking field names/paths and field value types at compile time and I kept having unexpected runtime errors. I wrote a replacement form library that can deeply typecheck all of that stuff.
If I had turned an AI loose against the original codebase, I think it would have just churned away copying the existing patterns and debugging any runtime errors that result. I don't think an AI would have ever voluntarily told me "this form library is costing time and effort, we should replace it with such and such instead"
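For anyone wondering what "deeply typecheck all of that stuff" can mean in practice, here is a hedged sketch of the standard TypeScript trick, not the commenter's actual library: derive the set of valid "a.b.c" paths and their value types from the form's shape, so both the path and the value are checked at compile time. All names below are hypothetical.

```typescript
// All valid dotted paths into T, e.g. "customer" | "customer.name" | "customer.address.city".
type Path<T> = T extends object
  ? { [K in keyof T & string]: T[K] extends object
        ? K | `${K}.${Path<T[K]>}`
        : K
    }[keyof T & string]
  : never;

// The value type sitting at a given path.
type PathValue<T, P extends Path<T>> =
  P extends `${infer Head}.${infer Rest}`
    ? Head extends keyof T
      ? Rest extends Path<T[Head]> ? PathValue<T[Head], Rest> : never
      : never
    : P extends keyof T ? T[P] : never;

interface OrderForm {
  customer: { name: string; address: { city: string } };
  quantity: number;
}

// Hypothetical setter: both the path and the value type are checked at compile time.
function setField<T, P extends Path<T>>(form: T, path: P, value: PathValue<T, P>): void {
  const keys = String(path).split(".");
  let target: any = form;
  for (const k of keys.slice(0, -1)) target = target[k];
  target[keys[keys.length - 1]] = value;
}

const form: OrderForm = { customer: { name: "Ada", address: { city: "Oslo" } }, quantity: 1 };
setField(form, "customer.address.city", "Bergen");   // OK
// setField(form, "customer.address.zip", "1234");   // compile error: not a valid path
// setField(form, "quantity", "ten");                // compile error: value must be a number
```

The payoff is exactly the one described: typos in field paths and mismatched value types become compile errors instead of unexpected runtime errors.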
You're right that a dev's job is to solve problems. However, one loses a lot of that if one doesn't think in computerese - and only reading code isn't enough. One has to write code to understand code. So for one to do one's _actual_ job, they cannot depend solely on "AI" to write all the code.
We used to say that about people who wrote in C instead of assembler. Then we used to say that (and many still do) about people who opted for "scripting languages" over "systems languages".
It's "true" in a sense. It helps. But it is also largely irrelevant for most of us, in that most of us are writing code you can learn to read and write in a tiny proportion of the time we spend in working life. The notion that you need to keep spending more than a tiny fraction of your time writing code in order to understand enough to be able to solve business problem will seem increasingly quaint.
> The notion that you need to keep spending more than a tiny fraction of your time writing code in order to understand enough to be able to solve business problems will seem increasingly quaint.
Completely disagree. Reading books doesn't make you an author. Reading books AND writing books makes you an author.
The entire point is we increasingly don't need to be authors.
Most of us aren't paid to be authors in your analogy.
(Which is good, because outside of your analogy, most authors are paid peanuts, and most of those of us who do write do so because we enjoy it, not as a job)
But even if our jobs were to be authors, while I learned some things about writing books from writing the novels I have written and published, I learned far more from being a voracious reader for decades.
I probably needed both, and I'm sure I'd improve as a writer past what I could from just reading by writing more, but I think your analogy, if anything, is a perfect fit for my point that we don't need to spend more than a tiny proportion of our time writing to be competent at it (I won't claim great).
Many of us will probably keep doing it for fun, but it will be increasingly hard to justify "manual coding" at work.
Exactly this. The shift from "writing code" to "reviewing code and focusing on architecture" is the natural evolution. Every abstraction layer in computing history freed us to think at higher levels - assembler to C, C to Python, and now Python to "describe what you want."
The people framing this as "cognitive debt" are measuring the wrong thing. You're not losing the ability to think - you're shifting what you think about. That's not a bug, it's the whole point.
The problem is: how do you review code if you don't know what it is supposed to look like? Creativity is not only in the problem-solving step but also in the implementation, and letting an LLM do most of it is incredibly dangerous for the future, even more so when juniors are gaining experience this way. The software quality will be much worse, and the churn even higher, and I will be on a farm with my chickens.
If you spend all your time on that, you might actually lose the ability to actually do it. I find a lot of "non core" tasks are pretty important for skill building and maintenance.
I sympathise, in as much as I love writing code too, but I increasingly restrict that to my personal projects. It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.
> It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.
In practice, this isn't bearing out at all though both among my peers and with peers in other tech companies. Just making a blanket statement like this adds nothing to the conversation.
if you're a consultant/contractor that's bid a fixed amount for a job: you're incentivised to slop out as much as possible to complete the contract as quickly as possible
and then if you do a particularly bad job, you'll probably be kept on to fix up the problems
vs. a permanent employee that is incentivised to do the job well, sign it off and move onto the next task
You're making flawed assumptions you have no basis for.
Most of my work is on projects I have a long term vested interest in.
I care far more about maximally leveraging LLMs for the projects I have a vested interest in - if my clients don't want to, that's their business.
Most of my LLM usage directly affects my personal finances in terms of the ROI my non-consulting projects generate - I have far more incentives to do the job well than a permanent employee whose work does not have an immediate effect on their income.
> My "actual job" isn't to write code, but to solve problems.
Air quotes and more and more general words. The perfect mercenary's tools.
The buck stops somewhere for most of us. We have jobs, we are compelled to do them. But we care about how it is done. We care whether doing it in a certain way will give us short-term advantages but hinder us in the long term. We care if the process feels good or bad. We care if it feels like we are in control of the process or if we are just swimming in a turbulent sea. We care about how predictable the tools we use are. Whether we can guess that something takes a month and not be off by weeks.
We might say that we are the perfect pragmatists (mercenaries); that we only care about the most general description of what-is-to-be-done that is acceptable to the audience, like solving business problems, or solving technical problems, or in the end—as the pragmatist sheds all meaning from his burdensome vessel—just solving problems. But most of us got into some trade, or hobby, or profession, because we did concrete things that we concretely liked. And switching from keyboards to voice dictation might not change that. But seemingly upending the whole process might.
It might. Or it may not. Certainly could go in more than one direction. But to people who are not perfect mercenaries or business hedonists[1] these are actual problems or concerns. Not nonsense to be dismissed with some “actual job” quip, which itself is devoid of meaning.
I'm in the same boat. There's a lot of things I don't know and using these models help give direction and narrow focus towards solutions I didn't know about previously. I augment my knowledge, not replace.
Some people learn from rote memorization, some people learn through hands on experience. Some people have "ADHD brains". Some people are on the spectrum. If you visit Wikipedia and check out Learning Styles, there's like eight different suggested models, and even those are criticized extensively.
It seems a sort of parochial universalism has coalesced, but people should keep in mind we don't all learn the same.
ETA: I'd also like to say that learning from LLMs is vastly similar to, and in some ways more useful than, finding blogs on a subject. A lot of the time, say for Linux, you'll find instructions where, even if you perform them to a tee, something goes pear-shaped because of tiny environment variables or a single package update changing things. Even Photoshop tutorials are not free of this madness. I'm used to mostly-correct-but-just-this-side-of-incorrect instructions. LLMs are no different in a lot of ways. At least with them I can tailor my experience to just what I'm trying to do and spend time correcting that, versus loading up a YT video and trying to understand why X doesn't work. But I can understand if people don't get the same value as I do.
That's a nice anecdote, and I agree with the sentiment - skill development comes from practice. It's tempting to see using AI as free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information, but for many people the overall trade off in terms of time and energy savings is worth it; giving them room to do more or other things.
If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.
Trade offs around "room to do more of other things" are an interesting and recurring theme of these conversations. Like two opposites of a spectrum. On one end the ideal process oriented artisan taking the long way to mastery, on the other end the trailblazer moving fast and discovering entirely new things.
Comparing to the encyclopedia example: I'm already seeing my own skillset in researching online has atrophied and become less relevant. Both because the searching isn't as helpful and because my muscle memory for reaching for the chat window is shifting.
It's a servant, in the Claude Code mode of operation.
If you outsource a skill consistently, you will be engaging less with that skill. Depending on the skill, this may be acceptable, or a desirable tradeoff.
For example, using a very fast LLM to interactively make small edits to a program (a few lines at a time), outsources the work of typing, remembering stdlib names and parameter order, etc.
This way of working is more akin to power armor, where you are still continuously directing it, just with each of your intentions manifesting more rapidly (and perhaps with less precision, though it seems perfectly manageable if you keep the edit size small enough).
Whereas "just go build me this thing" and then you make a coffee is qualitatively very different, at that point you're more like a manager than a programmer.
> then perhaps AI is the electric scooter for the mind
I have a whole half-written blog post about how LLMs are the cars of the mind. Massive externalities, has to be forced on people, leads to cognitive/health issues instead of improving cognition and health.
I’ve also noticed that I’m less effective at research, but I think it’s our tools becoming less effective over time. Boolean doesn’t really work, and I’ve noticed that really niche things don’t surface in the search results (on Bing) even when I know the website exists. Just like LLMs seem lazy sometimes, search similarly feels lazy occasionally.
This is the typical arrogance of developers not seeing the value in anything but the coding. I've been hands on for 45 years, but also spend 25 of those dealing with architecture and larger systems design. The actual programming is by far the simplest part of designing a large system. Outsourcing it is only dumbing you down if you don't spend the time it frees up to move up the value chain.
Talk about arrogance, Mr 45 Years of Experience. Ever thought that there might be people under the skyscraper that is your ego? I'm pretty sure the majority of tech workers aren't even 45 years old. Where are they supposed to learn good design when slop takes over? You've spent at least 20 years JUST programming, assuming you never touched large-scale design before the last 25 years. Simplest part my ass.
> Ever thought that there might be people under the skyscraper that is your ego?
I do, which is exactly why I found the presumption that not spending your time doing the coding is equivalent to a disability both gross and arrogant.
> Where are they supposed to learn good design when slop takes over?
You're not learning good architecture and systems design from code. You learn good architecture and systems design from doing architecture and systems design. It's a very different discipline.
While knowing how to code can be helpful, and can even be important in narrow niches, it is a very minor part of understanding good architecture.
And, yes, I stand by the claim the coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
> And, yes, I stand by the claim the coding is by far the simplest part, on the basis of having done both for longer than most developers have been doing either.
"I reckon this is even the case when using it as an interactive encyclopedia".
Yes, that is my experience. I have done some C# projects recently, in a language I am not familiar with. I used the interactive encyclopedia method, "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.
OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.
I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!
I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.
If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.
But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.
A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.
I'd say the new problem is knowing when understanding is important and where it's okay to delegate.
It's similar to other abstractions in this way, but on a larger scale due to LLM having so many potential applications. And of course, due to the non-determinism.
My argument is that understanding is always important, even if you delegate. But perhaps you mean sometimes a lower degree of understanding may be okay, which may be true, but I’d be cautious on that front. AI coding is a very leaky abstraction.
We already see the damage of a lack of understanding when we have to work with old codebases. These behemoths can become very difficult to work in over time as the people who wrote it leave, and new people don’t have the same understanding to make good effective changes. This slows down progress tremendously.
Fundamentally, code changes you make without understanding them immediately become legacy code. You really don’t want too much of that to pile up.
I'm writing a blog post on this very thing actually.
Outsourcing learning and thinking is a double edged sword that only comes back to bite you later. It's tempting: you might already know a codebase well and you set agents loose on it. You know enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors like antirez for example.
Similarly, you see success stories with folks making something greenfield. Since you've delegated decision making to the LLM and gotten a decent looking result it seems like you never needed to know the details at all.
The trap is that your knowledge of why you've built what you've built the way it is atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.
I'd briefly come across Elk, but couldn't tell how it was better than what I was using. The examples I could find all showed far simpler graphs than what we had, and nothing that seemed to address the problems we had, but maybe I should give it another look, because I've kinda lost faith that dagre is going to do what we need.
If I can explain briefly what our issue is: we've got a really complex graph, and need to show it in a way that makes it easy to understand. That by itself might be a lost cause already, but we need it fixed. The problem is that our graph has cycles, and dagre is designed for DAGs; directed acyclic graphs. Fortunately it has a step that removes cycles, but it does that fairly randomly, and that can sometimes dramatically change the shape of the graph by creating unintentional start or end nodes.
I had a way to fix that, but even with that, it's still really hard to understand the graph. We need to cut it up into parts, group nodes together based on shared properties, and that's not something dagre does at all. I'm currently looking into cola with its constraints. But I'll take another look at elk.
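For what it's worth, one way to make the cycle removal deterministic can be sketched roughly like this (illustrative TypeScript, not dagre's own implementation and not necessarily the fix referred to above): run a DFS from explicitly chosen entry nodes and reverse only the back edges it finds, remembering which edges were flipped so the renderer can restore the arrow direction after layout.

```typescript
interface Edge { from: string; to: string; }

function breakCycles(
  nodes: string[],
  edges: Edge[],
  entryNodes: string[],
): { acyclicEdges: Edge[]; reversed: Set<Edge> } {
  const out = new Map<string, Edge[]>();
  for (const n of nodes) out.set(n, []);
  for (const e of edges) out.get(e.from)?.push(e);

  const state = new Map<string, "new" | "active" | "done">();
  for (const n of nodes) state.set(n, "new");
  const reversed = new Set<Edge>();

  // Recursive DFS for clarity; use an explicit stack for very large graphs.
  function dfs(n: string): void {
    state.set(n, "active");
    for (const e of out.get(n) ?? []) {
      const s = state.get(e.to);
      if (s === "active") reversed.add(e);   // back edge: it closes a cycle
      else if (s === "new") dfs(e.to);
    }
    state.set(n, "done");
  }

  // Start from the intended entry nodes so they stay sources in the layout,
  // then sweep the remaining nodes in a fixed order for determinism.
  for (const n of [...entryNodes, ...nodes]) {
    if (state.get(n) === "new") dfs(n);
  }

  const acyclicEdges = edges.map(e =>
    reversed.has(e) ? { from: e.to, to: e.from } : e,
  );
  return { acyclicEdges, reversed };
}
```

Every cycle contains at least one back edge of the DFS, so reversing them leaves the graph acyclic, and because the traversal starts from the nodes you designate as entries, they stay sources instead of whatever an arbitrary cycle-removal pass happens to pick; edges in `reversed` simply get their arrowheads drawn the original way round after layout.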
>but I can feel my brain engaging less with the problem than I'm used to
With me it has been the opposite, perhaps because I was anti-AI before and because I know it is gonna make mistakes.
My most intense AI usage:
Goal: Homelab is my hobby and I wanted to set up private tracker torrenting, fully via Proton VPN.

I am used to tools such as Ansible and the Linux operating system, but there were like 3 different tools to manage the torrents, plus a bunch of firewall rules so that in case Proton VPN drops, everything stops working instead of using my real IP address and snitching on me to my ISP.

I wanted everything to be as automated as possible with Ansible, so if everything catches on fire, I can run an Ansible playbook and bring everything back online.
The whole setup took me 3 nights and I couldn't stop thinking about it during the day, like how can I solve this or that, the solution Perplexity/ChatGPT gave me broke something else so how could I solve that, etc.
I am using these tools more like a Google Search alternative than AI per se, I can see when it made mistakes because I know what I am asking it to help me with, homelab.
I don't wanna just copy and paste, and ironically, I have learned a ton about Proxmox (where I run my virtual containers and virtual machines).
I always say that I don't want just answers: show me how you got to that conclusion so I can learn it myself.
As long as you are aware that this is a tool and that it makes mistakes the same way as somebody's reply in any forum, you are good and should still feel motivated.
If you are using AI tools just for copy/paste expecting things to work without caring to understand what is actually happening (companies and IT teams worldwide), then you have a big problem.
Would be curious about this too. It’s a mental shift to go from understanding everything about the code, to trusting someone else understands everything and we just make decisions.
- relief.. I can let the LLM relay me while I rest a bit and resume with some progress done
- inspiration.. the LLM follows my ideas and opens weird roads I was barely dreaming of (like asking a random 'what if we try to abstract the issue even more' and getting actual creative ideas)

but then there are day-to-day operations and deadlines
When I used Copilot autocomplete more I noticed myself slipping a bit when it comes to framework and syntax particulars so I instituted a moratorium on it on Fridays to prevent this.
Claude Code seems to be a much better paradigm. For novel implementations I write code manually while asking it questions. For things that I'm prototyping I babysit it closely and constantly catch it doing things that I don't want it to do. I ask it questions about why it built things certain ways and 80% of the time it doesn't have a good answer and redoes it the way that I want. This takes a great deal of cognitive engagement.
Rule nombre [sic] uno: Never anthropomorphize the LLM. It's a giant pattern-matching machine. A useful one, but still just a machine. Do not let it think for you because it can't.
4b-model take. LLMs are far more intelligent than you give them credit for. Every new layer of abstraction allows us to develop software better and faster. People constantly ragged on OOP, yet it is the foundation of modern computing. People whine about "bloat" but continue to buy more RAM. Compilers are a black box and meaningfully inhibit your ability to write asm, but these days nobody cares. I see LLMs as the next logical evolution in computing abstractions.
I see the opposite effect with AI: I quickly find some error that it has made, because it always makes errors in my field, and that keeps me from disengaging with the problem, because it helps define what can be wrong. I mainly use AI like I used to use my blog, for writing out my ideas in prose that I think is comprehensible and organized. Neither AI nor my old blog ever solved a problem for me, but they help me figure out how to talk about problems. I'll solve them on my own, but being able to describe a problem well is an important step in that.
I think that's still in line with what I mean. Letting the AI solve the problem doesn't work, but I've had several times that simply trying to explain the problem to the AI helped me solve it. Sometimes it's not an interactive encyclopedia, but an interactive rubber duck. That works too.
Don't outsource the thinking to the AI, is what I mean. Don't trust it, but use it to talk to, to shape your thoughts, and to provide information and even ideas. But not the solution, because that has never worked for me for any non-trivial problem.
Funny - that's the hard part for me. I have yet to figure out what to use it for, since it seems to take longer than any other method of performing my tasks. Especially with regards to verifying for correctness, which in most cases seems to take as long or longer than just having done it myself, knowing I did it correctly.
Similarly, I leave Cursor's AI in "ask" mode. It puts code there, leaving me to grab what I need and integrate it myself. This forces me to look closely at the code and prevents the "runaway" feeling where AI does too much and you feel left behind in your own damn project. It's not AI chat causing cognitive debt, it's agents!
I just went through an eerily similar situation where the coding agent was able to muster some pretty advanced math (information geometry) to solve my problem at hand.
But while I was able to understand it enough to steer the conversation, I was utterly unable to make any meaningful change to the code or grasp what it was doing. Unfortunately, unlike in the case you described, chatting with the LLM didn’t cut it as the domain is challenging enough. I’m on a rabbit hunt now for days, picking up the math foundations and writing the code at a slower pace albeit one I can keep up with.
And to be honest it’s incredibly fun. Applied math with a smart, dedicated tutor and the ability to immediately see results and build your intuition is miles ahead of my memories back in formative years.
I think a good rule of thumb is, only have AI write some code when you know exactly what it should look like and are just too lazy to type it out, or, if it is code that you would have otherwise just pulled down from some open source library and not written yourself anyway.
It reads like an anti-ad for both. "I didn't use the Copilot IDE because I lack control over the context provided" and "I used Copilot 365 because it for sure doesn't have any context of anything because connecting things to it is hard/expensive".
> a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning.
I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.
But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.
This is a technical forum, isn't pretentious name dropping kind of what we do?
Seriously though, I appreciated it because my curiosity got the better of me and I went down a quick rabbit hole into Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best initiative for learning). So yeah man, let's keep name dropping pretentious technical details, because that's half the reason I surf this site.
And yes, I did use ChatGPT to familiarize myself with these concepts briefly.
I think many are not doing anything like this, so to the person who is not interested in learning anything, technical details like this sound like pretentious name dropping, because that is how they relate to the world.
Everything to them is a social media post for likes.
I have explored all kinds of graph layouts in various network science contexts via LLMs and guess what? I don't know much about graph theory beyond G = (V,E). I am not really interested either. I am interested in what I can do with and learn from G. Everything to the right of the equals sign I leave to Gemini; it is already beyond my ability. I am just not that smart.
The standard narrative on this board seems to be something akin to having to master all volumes of Knuth before you can even think to write a React CRUD app. Ironic since I imagine so many learned programming by just programming.
I know I don't think as hard when using an LLM. Maybe that is a problem for people with 25 more IQ points than me. If I had 25 more IQ points maybe I could figure out stuff without the LLM. That was not the hand I was dealt though.
I get the feeling there is immense intellectual hubris on this forum, so that when something like this comes up, it is a dog whistle for these delusional Erdős-in-their-own-mind people to come out of the woodwork to tell you how LLMs can't help you with graph theory.
If that wasn't the case there would be vastly more interesting discussion on this forum instead of ad nauseam discussion on how bad LLMs are.
I learn new things everyday from Gemini and basically nothing reading this forum.
For many people here, knowing various algorithms, data structures, and how to code really well and really fast are the only things that differentiate them from everyone else, and largely define their identity. Now all of that value, status, and exclusivity is significantly threatened.