ACC generally maintains a 3-4 second gap between you and the car in front of you. I live in SoCal, so a lot of my driving is on very aggressive routes. The 4-second gap is mechanically safe but practically unusable, because it creates a void large enough to invite other cars to change lanes in front of me. So when that car merges in, the ACC detects a violation of the safe braking distance and decelerates to reestablish the gap. I call it the "cut me off" loop when we're on trips.
And before anyone suggests that I start tinkering around with the settings, I have adjusted it and the damned thing just resets itself constantly.
The beauty of ACC is that it lets you disengage mentally. You can be aggro with it on if you want to, but I found it's just not emotionally worth it to get mad at being cut off anymore in a car with ACC. ACC handles going forward and I'm not touching gas or brake. If I'm not touching either, I don't have to panic-react to getting cut off, just check that the ACC is handling it, and if that's all I need to do, versus slamming on the brakes, then eh.
Ah yes, I never used that. My car isn't very recent (about 10 years old now), and I drive very little (about 2-3k per year; I take the train to go anywhere far) because I hate it.
But the adaptive part would make it much more useful indeed.
However, something that is extremely annoying in France is that speed limits tend to change very often and abruptly.
I just think that trying to solve the problem solely at the car level is always going to have too many limitations...
In the UK, it took me half an hour and £30 to open a Ltd, which I think is the equivalent of a GmbH.
It might have changed, but a few years ago you could go from 0 to a fully functional limited company, with accounting, business account, registered address with mail forwarding, etc. in a matter of days, from the comfort of your sofa.
In Germany you also have the UG, which is like a small GmbH with a €1 minimum capital requirement, that is, if you don't mind the roughly €1k (and up to €2k) it costs to set up.
I consider myself rather smart and good at what I do. It's nice to have a look at problems like these once in a while, to remind myself of how little I know, and how much closer I am to the average than to the top.
Well it is a specialized problem. If you've never worked on anything similar previously, it is going to take time. Don't even need to interview for selective billion dollar companies like Anthropic to encounter these types of problems - after college I interviewed for various electronics/hardware companies where you'd get asked to optimize low-level code - which would have looked quite foreign, if you had never actually worked on such problems before.
If you ask an EE to debug react state management code without prior exposure they won't do too well either. But on the other hand they can easily pick up most of it after a week long crash course while training a performance engineer who can optimize code for a specific architecture would take months.
> they can easily pick up most of it after a week long crash course
I have to disagree and question what you mean by "optimization". It's very easy to write web code that technically accomplishes a task, but does so poorly. This is the natural consequence of having so many options available.
The vast majority of web devs with less than 5 years of experience simply don't understand plain javascript well enough. It's a longstanding problem that devs will reach for the most ergonomic tools, not the best tools.
Lacking sufficient experience, they can't help it. This happens in all programming languages and in all layers of software. AI slop is even worse because it tends towards the mean.
Engineering is more or less about getting familiar with the proper tools and using them to solve specific problems: adding new features, debugging, refactoring and optimizing.
And the tools themselves are built by other engineers and they need new features, debugging, optimization etc. It is turtles all the way down.
But each layer has its own jargon, conventions and unwritten hacks. That is where experience comes in. Once you climb out of a rabbit hole or pothole, you are one step closer to becoming the “domain expert”. There is no shortcut.
>The vast majority of web devs with less than 5 years of experience simply don't understand plain javascript well enough
they are never tested on it, and many won't dig that deep in the day-to-day. Whose fault is it that they don't know plain javascript well enough? That's the result of shipping "content" over any other metric of proper software engineering.
Funnily enough I did take a mini-course (not a week, but we're talking maybe 100 hours of work as a recreational online summer class) in plain javascript at my university. Quite the quirky language. But this was in ES3 or so, so maybe there's many more guard rails these days against the core jank that makes up JS
> EE to debug react state management ... easily pick up most of it after a week long crash course while training a performance engineer ... would take months
Isn't that mostly because as you go up the abstraction layers, tools and docs to teach yourself the tricks of the trade fast are in abundance (let alone a popular layer like React)? Which in turn is likely a function of incentives and opportunities.
It's because the higher up the stack you go, tools become more declarative and literate. Calling sort is far easier than understanding the algorithm for example.
> Calling sort is far easier than understanding the algorithm for example.
This was one of my gripes in college, why am I implementing something if I just need to understand what it does? I'm going to use the built-in version anyway.
Because that's the entire point of college. It's supposed to teach you the fundamentals - how to think, how to problem solve, how to form mental models and adapt them, how things you use actually work. Knowing how different sorting functions work and what the tradeoffs are allows you to pick the best sorting function for your data and hardware. If the tools you have aren't doing the job, you can mend them or build new tools.
So you know which sort to call because there isn't a right answer for all cases.
And so you can write your own because you're probably going to want to sort data in a specific way. Sort doesn't mean in numerical increasing or decreasing order, it means whatever order you want. You're sorting far more often than you're calling the sort function.
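To make that concrete, a tiny Python sketch (field names made up): most of the time "sort" means "put things in the order this task needs", which is the part you have to think about, not the call itself.

    # Made-up records, just to illustrate: "sorted" here means by *my* definition
    # of order (most urgent first, then oldest), not ascending numbers.
    tickets = [
        {"priority": 2, "age_days": 10, "title": "minor bug"},
        {"priority": 1, "age_days": 3,  "title": "outage"},
        {"priority": 1, "age_days": 30, "title": "old outage"},
    ]
    tickets.sort(key=lambda t: (t["priority"], -t["age_days"]))
    # -> old outage, outage, minor bug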
My degree was not specifically CS, it was a related degree, the focus was on landing jobs, but they still covered some CS concepts because some students were in fact doing a CS degree. I was more focused on "show me what I need to build things". I have never had to hand-craft any algorithm in my 15 years of coding, it just makes no sense to me. Someone else figured it out; I'm content understanding the algorithms.
In my twenty years, I've rerolled famous algorithms "every now and then".
It's almost wild to me that you never have.
Sometimes you need a better sort for just one task. Sometimes you need a parser because the data was never 100% standards compliant. Sometimes you need to reread Knuth for his line-breaking algorithm.
My high school computer science teacher (best one I ever had) once told us this anecdote when we were learning sorting algorithms:
He was brought in by the state to do some coaching for existing software devs back in the 90s. When he was going over the various different basic algorithms (insertion sort, selection sort, etc.) one of the devs in the back of the class piped up with, "why are you wasting our time? C++ has qsort built in."
When you're processing millions of records, many of which are probably already sorted, using an insertion sort to put a few new records into a sorted list, or using selection sort to grab the few records you need to the front of the queue, is going to be an order of magnitude faster than just calling qsort every time.
Turned out he worked for department of revenue. So my teacher roasted him with "oh, so you're the reason it takes us so long to get our tax returns back."
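For anyone who wants the teacher's point spelled out, here's a rough Python sketch with made-up sizes: when the bulk of the data is already sorted, slotting a handful of new records into place beats re-sorting everything.

    import bisect

    records = list(range(1_000_000))          # already sorted
    new_records = [123_456, 42, 2_000_000]    # a few late arrivals

    # Option A: re-sort everything, every time (O(n log n) per batch).
    resorted = sorted(records + new_records)

    # Option B: binary-search each new record into place
    # (O(log n) to find the spot, O(n) to shift, but only for the handful of new ones).
    for r in new_records:
        bisect.insort(records, r)

    assert records == resorted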
Thinking that you can just scoot by using the built-in version is how we get to the horrible state of optimization that we're in. Software has gotten slow because devs have gotten lazy and don't bother to understand the basics of programming anymore. We should be running a machine shop, not trying to build a jet engine out of Lego.
I mean, the lesson I got from my 10X class was pretty much that: "never write your own math library, unless you're working on maintaining one yourself".
funnily enough, this wasn't limited to contributing to some popular OS initiative. You can call YAGNI, but many companies do in fact have their own libraries to maintain internally. So it comes up more than you expect.
On a higher level, the time I took to implement a bunch of sorts helped me be able to read the docs for sort(), realize it's a quicksort implementation, and make judgements like
1. yeah, that works
2. this is overkill for my small dataset, I'll just whip up basic bubblesort
3. oh, there's multiple sort APIs and some sorts are in-place. I'll use this one
4. This is an important operation and I need a more robust sorting library. I'll explain it to the team with XYZ
The reasoning was the important lesson, not the ability to know what sorting is.
>Don't even need to interview for selective billion dollar companies like Anthropic to encounter these types of problems
I'll take any interviews at this point in time.
But yes, every domain has its jargon. I work tangentially to this and quickly understood this as a GPGPU problem. A relatively elementary one if you studied this space, though a time limit of 2 hours seems overly restrictive if you aren't actively studying this stuff.
After a quick look, this can be seen as a low-level GPU/TPU optimization problem where you have to consider the throughput and depth of the different arithmetic pipelines. If you want to hire people who understand how to do that, you unfortunately have to give them such a convoluted task and emulate the relevant parts of the HW. (In reality this is probably more like a TPU since it has scalar pipelines, but the optimization methods are not that different.)
The task is to parallelize tree traversal, which is embarrassingly unparallel so it's tricky.
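A toy Python sketch of what "embarrassingly unparallel" means here (an illustration, not the actual kernel): each step of a tree walk depends on the node the previous step landed on, so the chain itself can't be spread across lanes; the parallelism you can exploit is across many independent walks.

    def walk(children, value, depth):
        node = 0
        for _ in range(depth):
            # The next node depends on the current one: a strictly serial chain.
            node = children[node][value & 1]
            value >>= 1
        return node

    # What *can* run in parallel: many independent inputs, one walk each.
    children = [(1, 2), (3, 4), (5, 6), (0, 0), (0, 0), (0, 0), (0, 0)]
    results = [walk(children, v, depth=3) for v in range(16)]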
Is that really the case? My experience is fairly limited, but I've found that the LLM's willingness to fill in plausible sounding (but not necessarily at all accurate) numbers where it needs them to be a significant hindrance when asking it to think about performance.
And how would one do that these days if they didn't spend their career doing this pre-LLM? Just expect to study and perform such projects as a hobby for a few years on the side? These are specialized problems that you only really do for a few select companies.
I mean yeah... You kind of have to learn this stuff (performance engineering) by yourself (a strong education background helps a lot of course). There are transferable parts of it and there are platform-specific parts where you need to be somewhat familiar with GPUs.
Seems like another catch-22 when companies still want 3-5 years of industry experience, even if you work on some hobby projects. I'm not in this sector, but I had similar struggles getting noticed in another specific domain despite studying it for a while.
Since it's a CPU, you start with the idea that there is an ALU and spiral outward from that. That gives you something concrete to wrap your head around while you climb up the abstraction levels.
However, when I hit "scratch_write" and it wasn't in the Machine class and it wasn't coming from some Decorator and it was getting defined and deleted by a member function ... I stopped. That's paying lip service to the variable typing that is scattered around and actively hampers even basic IDE usage. Probably the typing was added by AI/LLM after the fact, and it missed that unusual usage. The Python convention used to be that those kinds of variables got declared as "_scratch_write" with a leading underscore to flag that they were "private/internal".
That was the gigantic red "We write shitty code" signal or worse "We don't care about wasting your time" signal. Human review should have flagged that.
Shame. I was kinda looking forward to the technical problem, but I'm not going to spend a bunch of time using grep to untangle garbage code to get at it.
I suspect everything would actually be much clearer if you wrote it in SystemVerilog and tested with Cocotb. Let's see if their LLMs can handle that porting job. HAH!
The types on the variables. Python recently adopted "gradual typing", but it isn't enforced by default. Consequently, you may have to actually execute a Python program to determine what an unlabeled variable type is.
A lot of people write Python code and then run "AI" on it to fill in the variable types. This, of course, is error prone and shitty. And the AI will miss strange usages like the one I flagged.
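A minimal sketch of the pattern being complained about (names are illustrative, not lifted from the repo): annotations only help for attributes that are actually declared; one that a method conjures up and deletes on the fly is invisible to type checkers and IDEs alike.

    class Machine:
        instrs: list[tuple]          # declared: checkers and IDEs can see this

        def __init__(self) -> None:
            self.instrs = []

        def step(self) -> None:
            # Created and destroyed inside a method, never declared or annotated,
            # so tools would have to run the code to know it ever exists.
            self.scratch_write = {}
            ...
            del self.scratch_write

    # The old convention was at least to flag such internals with an underscore:
    # self._scratch_write = {}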
Although I am sorry for phrasing it as "variable typing". I can see how you might read that as "typing that varies" instead.
The question isn't clearly written down anywhere, that's why. Presumably actual candidates would have been given more info over the phone or email. Part of the "challenge" is reverse engineering their Python; unclear if that's intentional.
If you look at the top of perf_takehome.py then there is a brief comment saying the challenge is to optimize a kernel. Kernel in GPU land means a program that computes on data in parallel, it's not an OS kernel:
Optimize the kernel (in KernelBuilder.build_kernel) as much as possible in the
available time, as measured by test_kernel_cycles on a frozen separate copy
of the simulator.
However, this kernel doesn't run on an actual GPU. It runs on a little interpreter for a custom assembly language written in Python. Thus you will be optimizing the program built in-memory by the function on this line:
Like reference_kernel2 but building actual instructions.
Scalar implementation using only scalar ALU and load/store.
The KernelBuilder class has some fields like "instrs" but we can't immediately see what they're meant to be because this is Python and types are optional. Nonetheless we can see that instructions are being added to a list, and below we can see the test_kernel_cycles function that runs the interpreter on the program. So our mission is to change the build_kernel function to make a better program. And it says this is an assembly version of the python function reference_kernel2 which is found in problem.py.
What exactly is this kernel doing? The reference_kernel2 function doesn't explain itself either - it's some sort of parallel tree walk. Let's put that to one side for a second and explore the machine, which is defined in problem.py. The machine itself is also largely undocumented, but there's a brief description in a docstring on line 66.
At this point it helps to understand the design of exotic processors. The emulator is for a fictional CPU that uses a VLIW SIMD ISA. Normal programmers will never encounter such a chip. Intel tried to make such a machine decades ago and it never took off, since then the concept has been largely dead. I believe it's still used in some mobile DSPs like Qualcomm's Hexagon. Notably, NVIDIA PTX is not such an ISA so this seems to have been chosen just to make things harder. As the comment explains, in a VLIW machine multiple instructions are packed together into a "slot" and executed in parallel. In a normal CPU the hardware reads a serial stream of instructions and works out just in time which can be executed in parallel, using fancy out-of-order circuitry. In a VLIW machine that's done ahead of time by the compiler or (in this case) the humble programmer, you. But this isn't just a VLIW machine, it's also multi-core, and multi-"engine", so there are multiple levels of execution going on. And it's SIMD, meaning each instruction can itself operate on multiple bits of data simultaneously.
This machine doesn't have registers or cache but it does have "scratch space", and so you can use the vector instructions to load data into a series of 32 bit scratch words and then do things on them in parallel. And multiple vector instructions can also run in parallel. "Broadcasting a scalar" in SIMD-speak means taking a single value and repeating it over multiple scratch space slots (or register subwords in a real machine), so you take e.g. 0xFF and get 0xFFFFFFFFFFFFFFFF.
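If a picture helps, here's a plain-Python sketch of those two ideas (an illustration only, not the simulator's actual instruction format):

    LANES = 8

    def broadcast(x):
        # "Broadcasting a scalar": one value repeated across every SIMD lane.
        return [x] * LANES

    def vadd(a, b):
        # One vector instruction: LANES element-wise adds in a single step.
        return [ai + bi for ai, bi in zip(a, b)]

    ones = broadcast(0xFF)              # [0xFF, 0xFF, ..., 0xFF]
    bumped = vadd(ones, broadcast(1))   # every lane computed "at once"

    # The VLIW part is orthogonal: the program is a list of bundles, and everything
    # you pack into one bundle is *your* promise that the ops are independent, so the
    # machine can run them in the same cycle without any hazard checking.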
And that's it, that's all we get. As the code says: "This comment is not meant to be full ISA documentation though, for the rest you should look through the simulator code". Possible point of confusion: real ISAs are serialized to bytes but this one is just Python tuples. The code is only partially typed; sometimes you're just left guessing.
So to recap, the problem is to optimize an undocumented program expressed in undocumented data structures returned by a Python function whose result is interpreted by a partly documented Python class that simulates a fictional exotic CPU architecture using an abandoned design that gives a lot of parallel computational capacity, but which requires all parallelism to be statically declared ahead of time, whilst simultaneously reverse engineering the Python that does all this.
Does that help? Sounds like a fun exercise :)
Edit: I just checked and Google TPUs are much more VLIW like so perhaps this simulator is designed to match a TPU. I know Anthropic rely on TPUs for serving and have done some optimization for them.
It does seem a bit of a strange challenge - a bit reminiscent of high school math problems where understanding the question was as much part of it as actually solving the problem when you understood it.
Since the focus of the challenge appears(?) intended to be optimization, not reverse engineering, it's a bit odd that they don't give a clear statement of what the kernel is meant to be computing. Perhaps the challenge is intended to be a combination of the two, but then the correct reverse engineering part of it becomes a gate for the optimization part, else you'll be solving the wrong problem.
Given the focus on results achieved by Opus 4.5, maybe that's the main point - to show how well Opus can reverse engineer something like this. If they gave the actual clear problem statement, then maybe you could brute force an optimal solution using tree search.
I just threw this prompt at Gemini, and it seems (I haven't analyzed the problem to see if it is correct), to be able to extract a clear understanding of the problem, and a specification for the kernel.
"Can you "reverse engineer" what the kernel in this optimization exercise is actually doing - write a specification for it?
Gemini says it's doing inference on a random forest - taking a batch of inputs, running each one through each decision tree, and for each input outputting the sum of these decision tree outputs - the accumulated evidence.
So looking at the actual code (reference_kernel() in problem.py), this "random forest inference" is completely wrong!
It's doing some sort of binary tree traversal, but the hashing and wrap around looks weird - maybe just a made up task rather than any useful algorithm?
This isn't "reverse engineering" it's merely "being able to read fairly simple code you didn't write". A much simpler version of the kernel is provided at the end of problem.py as reference_kernel2.
If you can't make sense of such a small codebase or don't immediately recognize the algorithm that's being used (I'm guilty of the latter) then you presumably aren't someone that they want to hire.
Fair enough, and there are clues in the comments too, but why not just provide the specification of the kernel (inputs and outputs) as part of the problem?
They do. They provide reference_kernel which shows the algorithm itself, build_mem_image which shows the data format you will be working with, and finally reference_kernel2 which implements said algorithm on said data format.
They then provide you with a very naive implementation that runs on their (very simple) VLIW architecture that you are to optimize.
If at the end of that someone is still lost I think it is safe to say it was their goal that person should fail.
Well, yes, they have a reference implementation as documentation, just as they have the simulator as documentation for the ISA ...
The problem is about pipelining memory loads and ALU operations, so why not just give clear documentation and state the task rather than "here's a kernel - optimize it"? ¯\_(ツ)_/¯
Presumably that is only one of two purposes, with the other being to test your ability to efficiently read, understand, and edit low level code that you didn't write. I imagine you'd regularly run into raw PTX if you worked for them in the relevant capacity.
And perhaps a third purpose is to use the simulator to test your ability to reason about hardware that you are only just getting familiar with.
I would assume that anyone optimizing kernels at Anthropic has full documentation and specs for what they are working on, as well as a personal butler attending to their every need. This is big money work - every 1% performance improvement must translate to millions of cost savings.
Maybe they specified the challenge in this half-assed way to deliberately test those sorts of skills (even if irrelevant to the job), or maybe it was just lazily put together.
The other thing to note is that if you look at what the reference_kernel() is actually doing, it really looks like a somewhat arbitrary synthetic task (hashes, wraparound), so any accurate task specification would really need to be a "line by line" description of the steps, at which point you may as well just say "here's some code - do this".
In a fast-paced domain such as this one, and especially given the (global) competitiveness, the development/leadership process is most likely chaotic, and the "best" practices we would normally find in lower-paced companies cannot be followed here. I think that by underspecifying the assignment they wanted to test the candidate's ability to fit into such an environment, apart from the obvious reason, which is to filter out insufficiently motivated candidates.
> but which requires all parallelism to be statically declared ahead of time
this is what all specialized chips like TPU/Cerebras require today, and it allows for better optimization than a generic CPU since you can "waste" 30 min figuring out the perfect routing/sequencing of operations, instead of doing it in the CPU in nanoseconds/cycles
another benefit is you can throw away all the CPU out-of-order/branch prediction logic and put useful matrix multipliers in its place
This is a nice writeup. Thanks. Another commenter said it would've taken them 2h just to sketch out ideas; sans LLMs it would've taken me more than 2h just to collect all this info, let alone start optimizing it.
It took me about 10 minutes to generate that writeup the old fashioned 100% organic way, because one of the things that's unspecified is whether you're allowed to use AI to help solve it! So I assumed as it's a job interview question you're not allowed, but now I see other comments saying it was allowed. That would let you get much further.
I think I'd be able to make some progress optimizing this program in two hours but probably not much. I'm not a performance engineer but have designed exotic emulated CPU architectures before, so that helps a lot.
I've not written a VM before, but the comments in perf_takehome.py and problem.py explain the basics of this.
I gleaned about half of this comment in a few minutes of just skimming the code and reading the comments on the functions and classes. There's only 500 lines of code really (the rest is the benchmark framework).
Same thought. I doubt they provided additional explanation to candidates - it seems that basic code literacy within the relevant domain is one of the first things being tested.
On the whole I don't think I'd perform all that well on this task given a short time limit but it seems to me to be an extremely well designed task given the stated context. The reference kernel easily fits on a single screen and even the intrinsic version almost does. I think this task would do a good job filtering the people they don't want working for them (and it seems quite likely that I'm borderline or maybe worse by their metric).
I'll be honest, that sounds like the opposite of fun since the worst parts of my job are touching the parts of a Python codebase that are untyped. The sad part is this work codebase isn't even that old, maybe a few years, and the developers definitely should have known better if they had anyone capable leading them. Alas, they're all gone now.
Harder than figuring out the instruction set for some exotic CPU are definitely the giant untyped dicts/lists common in data science code.
On the one hand, this exercise probably reflects a realistic task. Daily engineering work comprises a lot of reverse engineering and debugging of messy code.
On the other hand, this does not seem very suitable as an isolated assignment. The lack of code base-specific context has a lot of potential for frustration. I wonder what they really tested on the candidates, and whether this was what they wanted to filter for.
Generate instructions for their simulator to compute some numbers (hashes) in whatever is considered the memory of their "machine"¹. I didn't see any places where they actually disallow cheating b/c it says they only check the final state of the memory² so seems like if you know the final state you could just "load" the final state into memory. The cycle count is supposedly the LLM figuring out the fewest number of instructions to compute the final state but again, it's not clear what they're actually measuring b/c if you know the final state you can cheat & there is no way to tell how they're prompting the LLM to avoid the answers leaking into the prompt.
I guess your answer to "Try to run Claude Code on your own 'ill-defined' problem" would be "I'm not interested." Correct? I think we can stop here then.
You're missing the point. There is no evidence to support their claims which means they are more than likely leaking the memory into the LLM prompt & it is cheating by simply loading constants into memory instead of computing anything. This is why formal specifications are used to constrain optimization. Without proof that the code is equivalent you might as well just load constants into memory & claim victory.
Do you make a habit of not presuming even basic competence? You believe that Anthropic left the task running for hours, got a score back, and never bothered to examine the solution? Not even out of curiosity?
Also if it was cheating you'd expect the final score to be unbelievably low. Unless you also suppose that the LLM actively attempted to deceive the human reviewers by adding extra code to burn (approximately the correct number of) cycles.
This has nothing to do w/ me & consistently making it a personal problem instead of addressing the claims is a common tactic for people who do not know what it means to present evidence for their claims. Anthropic has not provided the necessary evidence for me to conclude that their LLM is not cheating. I have no opinion on their competence b/c that is not what is at issue. They could be incompetent & not notice that their LLM is cheating at their take home exam but I don't care about that.
You are implying that you believe them to be incompetent since otherwise you would not expect evidence in this instance. They also haven't provided independent verification of their claims - do you suspect them of lying as well?
How do you explain the specific score that was achieved if as you suggest the LLM simply copied the answer directly?
Either they have proof that their LLM is not cheating or they don't. The linked post does not provide evidence that the LLM is not cheating. I don't have to explain anything on my end b/c my claim is very simple & easily refuted w/ the proper evidence.
I don't have any insider information on what they know or don't know so you're welcome to keep asking nonsensical questions but eventually I'll stop answering.
- Optimize the kernel (in KernelBuilder.build_kernel) as much as possible in the
available time, as measured by test_kernel_cycles on a frozen separate copy
of the simulator
It comes with test suites, so that gives you a base to start from. You can at the very least do trial-and-error and come up with some heuristics on the fly. You're at a huge disadvantage to someone who has some familiarity but can convincingly play it off as being a newcomer, though.
Yours is a good mentality to have because it creates the emotional drive to learn more, so don't lose that. That being said, this isn't really that complicated. It's just a matter of taking enough time to look at the code and understand how it's structured. I feel like the thing that differentiates developer skill is pretty much being able to do that: holding a model of the program in your head.
For me, I've had that mentality for the longest time and I didn't get anything done because, well, "I'm just average".
For me, a little bit of arrogance (there's no way I couldn't do X, let's go do it), even if I end up "looking stupid" (see, I told you it was that hard!), was far more valuable to my development
Disagree. Nobody has a monopoly on what metric makes someone good. I don't understand all this leetcode optimization. Actually, I do understand it, but it's a game that will attract game optimizers.
Yes, this applies to some simulated imaginary CPU with an artificial problem. Except that the job asked here is exactly the core of what a performance engineer will do at anthropic: optimize kernels for their fleet of GPUs. Is it simplified? Yes! (e.g. the simulator does not restrict memory access patterns)
This is a real-world problem adapted to a lab setting that can fit in one's head in a matter of hours. Leetcode would have you reimplement the hashmap used in there.
Also, leetcode does not really provide insight into one's ability to design business solutions, whether that's system design, some small feature implementation, or communication skills within a team.
It's just optimizers jerking each other off on some cryptic problems 99.999999999% of developers will never see in real life.
Maybe it would've been useful like 30 years ago, but all commonly used languages have all these fancy algorithms baked into their stdlib, why would I ever have to implement them myself?
But this is an interview problem at Anthropic, not at your local CRUD factory. They _are_ looking for the optimizers, because they _are_ working on cryptic problems the 99.9999% of us will never encounter.
Understanding the basics is very different from being able to memorize algorithms. I really don't see why I'd ever have to implement stuff like quicksort myself somewhere. Yes, I know what recursion is; yes, I know what quicksort is, so if I ever need it I know what to look for. Which was good enough throughout my career.
Am I missing something? I've kept the iPhones I bought for 6 years or so. I replaced the battery on each phone, and all it cost me was 50€ and half an hour waiting for the local non-Apple phone shop to do the work. That surely counts as batteries being replaceable in all but name?
I'm happy that worked out for you, but the whole cryptography signature of Apple batteries that throttle your phone if you get the wrong one is VERY different from "just pop out the back and get your new battery in".
I feel like the price Apple charges for batteries is very reasonable. I kept my phone going for 4.5 years thanks to a battery replacement 2 years in. They’re basically doing it at cost, considering parts and labour.
They built an entire identity around it. They're in too deep to just back off and admit to being wrong. It's the same reason why doomsday cults are stronger and more united the day after the predicted end of the world: It's too late to back off, the only solution is to dig deeper.
Share of the world's GDP is a flawed metric. It tells us we're getting a smaller slice, but it doesn't tell us if the pie grew or shrunk. If the EU grew by 50% while India and China became 200% richer, then on paper the share of the world's GDP would be dramatically lower, while everyone would be better off.
I don't disagree with the sentiment you expressed at all though.
The logic isn't flawed. If you are a European investor, then you care about the returns in your currency, and the fact is your pile of money only grew by 4%.
Conversely, as a US investor, if you had invested 100€ in the EURO STOXX 50, your pile of money would have grown to about $140 (20% index growth, 15% dollar debasement). It absolutely makes a difference: that's $20 more in your pocket compared to the index return alone.
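Rough arithmetic behind those numbers, assuming EUR/USD started near parity (my assumption, to keep the figures simple):

    index_growth = 0.20   # EURO STOXX 50 return in euro terms
    fx_move = 0.15        # euro appreciation vs the dollar ("dollar debasement")

    usd_value = 100 * (1 + index_growth) * (1 + fx_move)
    print(round(usd_value))   # ~138, i.e. roughly $140 vs $120 from the index alone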
Your comparison with temperatures is wrong. Celsius and Fahrenheit are fixed units, whereas the value of currencies fluctuate.
It is flawed because it's conflating two different independent variables. It's also looking for a specific weak point where there is none, more as if it is trying to state a narrative.
If you want to talk about EURUSD then just state it.
> If the DOM is the authority on state (not a projection of state held elsewhere), there's nothing to sync. You read state from where it already lives.
But it's not, is it? The state lives in the server. If I open a page, and let it open for 2 hours, then there's no guarantee it actually represents the current state.
You are conflating all state into a big undifferentiated soup.
I reason about state differently. Here are my axioms:
1. There is no such thing as "state". There is user/UI state, there is cache state, there is DB cursor state, etc. Trying to manage all of that as one big undifferentiated ball is not proper separation of concerns.
2. What I call User State is the user's expectation that they will find things "as they left them". If, as a user, I do a page refresh, I expect that my preferences for colors and fonts will still be there, that the sorting on the table I was looking at will still be there, and that the filters I had applied to the table are still there. If I was editing a record but had not saved, I will still be editing that record without data loss on screen, but still not saved to the DB (unless the app is explicitly autosave). In a word: the user state is what the user can see.
3. Ideally my User State has a single source of truth that the user can always confirm by looking at it, and it therefore lives IN the front end.
4. Most, if not all, other state is stored on the server. Things like the user's auth state are a -backend- concern and should NOT be stored in the frontend.
5. Class-oriented programming is extremely difficult to manage: as I like to say, every class is a petri dish for state corruption. I prefer pure functions on the backend, for the reasons I outline in the dataos.software blog articles.
Pure functions are mostly deterministic (especially if you avoid floats). So you can not only count on getting the same results for the same query, you can cache them too. And you can test them trivially. Integration test? What's that?
When you capture the User State from the DOM (using a manifest so that you do not need to capture the whole DOM) and send it to the pure function on the back end, you have a perfect event-sourcing pattern. Not only can I tell you exactly who and what triggered a particular piece of HTML being sent to the screen, I can rewind and replay like a tape recorder.
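A skeletal Python sketch of that shape (names are mine, not from the blog): the captured user state is plain data, the renderer is a pure function of it, and caching plus replay fall out for free.

    from functools import lru_cache

    # User state captured from the DOM, serialized as hashable plain data,
    # e.g. (("sort", "date_desc"), ("filter", "open"), ("theme", "dark")).
    @lru_cache(maxsize=1024)
    def render(state: tuple) -> str:
        # Pure: same state in, same HTML out. Nothing hidden in object fields.
        sort = dict(state).get("sort", "default")
        return f"<table data-sort='{sort}'>...</table>"

    event_log = []

    def handle_request(state: tuple) -> str:
        event_log.append(state)       # replaying this log reproduces every screen
        return render(state)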
BTW, I have no hate for React. There are some things I think it does better than any other option. I was a core member of the React Studio team (expired cert on the site but safe to visit for archeological purposes).