
> Imagine if you could just send it the whole “page” worth of JSON. Make an endpoint for /page/a and render the whole JSON for /page/a there. Do this for every page. Don’t force your front-end developers to send a bunch of individual requests to render a complex page. Stop annoying them with contrived limitations. Align yourselves.

Why not just send HTML as a single response at that point? Sometimes it feels like we are making web development more complex than it needs to be.
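
For what it's worth, a minimal sketch of that idea in the Node world, assuming Express with a server-side template engine (the route, the template name, and the data helpers are all made up):

```typescript
import express from "express";

const app = express();
app.set("view engine", "ejs"); // any server-side template engine works

// stand-in for real data access
const db = {
  getUser: async (id: string) => ({ id, name: "Ada" }),
  getOrders: async (id: string) => [{ id: 1, total: 42 }],
};

app.get("/page/a", async (req, res) => {
  const userId = String(req.query.userId ?? "");
  // gather everything the page needs on the server...
  const [user, orders] = await Promise.all([db.getUser(userId), db.getOrders(userId)]);
  // ...and send one finished HTML document down the wire
  res.render("page-a", { user, orders });
});

app.listen(3000);
```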



How did we get to the point where simple and effective server-side pages are either unknown as a way of doing things or are considered to be a technical heresy?

Younger developers scoff at us fossils and the olden ways, but this is exactly how ignorance of classic, tried-and-true, performant patterns results in atrocious complexity and poor performance, just because "this is how things are done".

It HAS to be a BFF, it HAS to be 15 different microservices and 20 different AWS toys, right? There is no other way? Are you sure?

It seems like we are talking about the whole industry, but in my experience it's largely the Node ecosystem (which is, effectively, the industry). It's not as if there's no Rails development being done out there, with nothing but a trusty Postgres database behind it.

Ironically, our existing architecture patterns are beginning to resemble J2EE: a bulky, slow, expensive, boilerplate-laden process where a simple site costs a million dollars and performs like shit.


> How did we get to the point where simple and effective server-side pages are either unknown as a way of doing things or are considered to be a technical heresy?

Demand for interactivity. A real demand - that’s why desktop apps are now web apps.

It can be overapplied, sure, but there are great reasons not to serve HTML templates. The trend is not some massive incompetence.


Incompetence is the wrong word, but in an industry that grows so fast that 3 years of experience is considered senior, many people aren’t even aware that the old ways are an option.

In my experience, the percentage of large React sites where someone sat down and seriously considered old-fashioned server-side rendering, but decided on React because the site actually required a huge amount of interactivity, is close to zero.

Every React site I've been involved with in the last, say, 7 years was using React because "that's the way it's done", and anything else was just post hoc justification.


I built a website that was super simple, using server-side rendering with Django. Then I got a bunch of feature requests that made it less simple, and the client-side JavaScript started turning into crazy spaghetti code.

I actually somewhat knew React before this project, but besides not wanting to overcomplicate things, I was hesitant because I didn't like pure client-side rendering. Then I learned that NextJS makes it easy to mix and match client-side and server-side rendering, so I just switched to that.

So now, even for a simple website, I'll probably start with React because I don't want to dig myself into a needless hole.

I wrote about the experience here:

https://www.billprin.com/articles/why-i-ditched-django-for-n...

And yes, that post upset some people who thought I should have used htmx, but React is actually pretty easy and simple. Now most of my websites are React because I know it; it's easiest to use what you're used to.

Also, "I did it in React because that's the way it's done" isn't the worst reason because you benefit from that popularity, e.g. some library you need will have clear examples of integrating with React.

I think the frontend and JS communities have certainly gone off the rails at points, but I also think there's a popular sentiment on HN that exaggerates how off-the-rails things have gone, usually expressed by people who don't actually build many modern-looking websites. Most of the trends got popular for somewhat rational reasons, and even the stuff that has gotten a little crazy, like the explosion in dependencies, is really about tradeoffs: lots of dependencies cause lots of problems but also enable code reuse, so it has pros and cons.


Software engineering isn't a hard science, it's impossible to be 100% certain in one of these debates. You could be right and I could be wrong.

But I've been doing this for a long time, and that's what everyone says. I've heard this argument at least 100 times when someone is trying to defend their decision to build something overcomplicated. The number of times they ended up actually being justified is so small that I'm very suspicious of this argument.


I've had the same experience: going fully server-rendered to keep complexity down, then having to turn down requests for more interactivity to keep the code from turning into a spaghetti mess. Less interactivity isn't necessarily a bad thing, but it's definitely a trade-off to be very aware of.


I really enjoyed your blog post! (You have a broken link to a time zone post, FYI.)

If you're not expecting to need fancy client side stuff, does next.js give you as good a server side development experience as Django/Rails/Laravel? At that point, it would seem worth it "just in case", as you suggest, but in the past when I looked at it, the SSR stuff still felt a little cobbled together and experimental. I would love to hear from someone with firsthand experience switching, though!


Thanks for the kind words and the heads up on the broken link.

I think Next has a great DevEx, but I'm just using it for some projects as a solo dev without a ton of traffic, so perhaps there are some rough corners I haven't experienced yet.

I do think they are going down some risky paths, where they do things like rewrite your code files depending on whether you're on server or client, and in general with "framework-defined infrastructure", where if you write certain functions the framework "knows" they're supposed to run on the server or in a background job or whatever. But so far I've gotten the benefits without the drawbacks, though I might be jinxing myself.


There's also an interactivity requirement (as in, this is impossible without interactivity) and an interactivity desire (as in, UX folks want an interactive feature).

So many websites now pretend they're in the former category, but would work just fine if they implemented a less-interactive form. That's how it was done in the old days! "No, you can't have that. Here's the best we can do." "Oh, that's fine."

But I expect more of what's driving complicated backend/frontend split design these days is shipping the org chart (in companies large enough to have frontend and backend teams) and technology-by-consultant (where the right technology is whatever you can get a consultant in).


If you want to build a moderately complex interactive experience that doesn't suffer from round-trip latency on every single interaction, you need to recreate the interactive parts of the DOM in JavaScript anyway (ignoring wasm), or else do imperative updates, which quickly become a nightmare. jQuery-oriented development is a classic for sure; that doesn't mean people want to go back to it.


Because JavaScript rules web dev and the JS community has pushed so many mainstream ideas that have turned out to be duds. It’s a major echo chamber due to its size relative to other web dev communities.

I also partly blame the careerist obsession around always learning new technology, because it’s becoming obvious that a lot of the “new stuff” over the last decade created more problems than solutions. Don’t get me wrong, learning is an important part of the job, but we’ve created a culture of constantly pushing new technology to pad resumes and feed our insecurities around irrelevance instead of solving problems.


Hype, which the web/JS community seems particularly vulnerable to, also plays a big part. Yes proven tech is safe and reliable, but choosing it is so much less fun than getting swept away by the tide of whatever the newest "revolution" is. Don't you want to be part of the wave?


Personally, not for the day job. I love experimenting with new stuff in my free time but when the pressure is on to deliver for the business, I want to go with what I know works.


It seems to me that the more people use <thing>, the less evidence of quality of <thing> is required by the people using <thing> - everyone just assumes everyone else knows what they're doing.

It becomes something akin to a religion - after all, 2 billion Christians can't be wrong, can they?


> everyone just assumes everyone else knows what they're doing.

It’s not just that. There are practical reasons to choose the bandwagon, even if you know it’s not the technically optimal approach.

If you’re planning on hiring junior and mid level engineers, for example, it’s a great help if they already know the technologies involved. Your hiring pool for React is a lot bigger than for a more obscure tool. Additionally, the popular tools also tend to be stable and long lasting, with good support.

So, there are compelling business reasons to choose what's popular, even if you aren't making assumptions about the technical quality just because it's popular.


Eat shit - billions of flies cannot be wrong. (cit.)

More seriously: once a bit of tech becomes mainstream, it becomes politically easier to use it. Your manager has not heard of <cool tech>, but he did read about React in some business magazine. So when the choice is between React and bheadmaster's Funky Funkness Framework, he'll go with React.


No one ever got fired for choosing React


But some of them should have been.


Can you pick a less antagonistic example?


Would you prefer I used Judaism instead? Or Islam?

Or do you consider my point about network effect in tech stacks being similar to religious beliefs to be antagonistic by itself?


The adage I learned decades ago was "a billion Chinese can't be wrong".

If PP had used that, would it have been antagonistic? Racist?


I thought the origin of this expression was the Elvis greatest hits album 50,000,000 Elvis fans can't be wrong:

https://en.wikipedia.org/wiki/50,000,000_Elvis_Fans_Can%27t_...


I'm torn. On the one hand, there are many websites I hate interacting with because they are unnecessarily structured as JS clients exchanging data with a remote server via a network link. On the other hand, it's easier for me to develop libraries and CLI clients of my own, these days getting quite far with Firefox dev tools' network view and "copy as curl," and only occasionally having to actually read the code in those JS clients. In the old days, I would have to resort to some ugly and comparatively brittle page scraping.

This new world sucks for the interactive user in a browser (which is me often enough), but it's great for the guy who wants to treat a website as just a remote provider of some kind of data or service (also me often enough).


> How did we get to the point where simple and effective server-side pages are either unknown as a way of doing things or are considered to be a technical heresy?

We are not at that point, nor were we ever at that point or close to it.

Server-side rendering has been a hot topic in JavaScript framework circles for over a decade, and React famously solved that problem so well that it is now a standard, basic technique.

Also, just because you throw the same buzzword around doesn't mean the problem stayed the same. Reassembling a JavaScript-generated DOM with coherent state is not the same as sending an HTML document down the wire. And you know why people started assembling DOMs with JavaScript? Because sending HTML documents down the wire was too slow and did not allow any control over the aspects that dictate perceived performance.


The current SPA paradigm was adopted to solve a number of issues that engineering teams were facing. Your sentiment is one I'm seeing a lot more recently, which leads me to believe we're approaching the end of the current web development paradigm. There's too much time wasted writing _glue_ when developing SPAs: database queries, back-end controllers, data serialization, network requests, front-end state, shadow DOM, etc. Lots of frameworks are coming out that essentially remove the need to write glue code. I expect this trend to continue until we reach the point where you are mostly just writing a few things: the schema, non-generalizable business logic, and presentation.


Blazor Server, while definitely not the right thing for all applications, is the best solution I've found to this.

You write all the code in one language (C#), and it streams the changes to the client side as needed, automatically. It means you can call the database in your 'frontend' HTML layouts directly, and it gets rid of all the glue code. You have no serialization problems because you are using the same classes everywhere, and the component structure keeps things very clean.

I would say I am about 5x more productive in Blazor Server than any other frontend technology. You don't need to write APIs at all. It's just like writing static server side rendered pages but you can do dynamic stuff. It is honestly like magic.

It does come with some huge downsides, though, in that it needs to keep a websocket open to stream all the DOM changes from the server, and it also has memory/performance costs on the server (it needs to keep state on the server all the time). That basically rules it out for anything that needs to work over intermittent connections (mobile) and for very high-scale stuff (which could probably be done with enough server RAM, but it's not the right tool for that).

However, it's perfect for boring line of business applications that will only be accessed on solid internet connection, which is many of them.

There is also Blazor Webassembly which removes the websocket part, and lets you code frontend and backend in C# still, sharing a lot of classes, though you do need some glue code to connect the database. I've heard people are having good results with gRPC for that; but haven't looked into it much (blazor server works fine for most apps IMO).


The need for the open websocket and a heavy server is why I switched from Blazor to HTMX.


Not super familiar with Blazor, but something like this sounds like what I would consider the halfway point to the next paradigm. I think you have to go further in simplifying the back-end as well.


Is this kind of what Vaadin does?


Hadn't actually heard of that. Might be similar-ish, but Blazor uses standard HTML/CSS/JS and just sends DOM diffs down the wire. At first look, I don't think Vaadin works the same way.


Ah ok. So the server keeps track of what the client should look like and just keeps it up to date?

If so that sounds like a very nice model that would make it a lot easier to get back to just using simple HTML and delegate the complexity to the framework or server process. Are there other projects beside Blazor that use this type of approach?


As far as I can tell, the SPA paradigm solves exactly one issue: Whiny users saying "but it flashes and reloads the header and navigation between page views and desktop/mobile apps don't.".

Since HTML never had a proper way to compose content from multiple sources (frames being as close as it got), we instead have to punt this, like so much else, to JavaScript hairballs that glue it all together.


It is also touted as a way to grow engineering talent pools by subdividing into frontend and backend teams. This is not super convincing to me in a context where the only frontend is HTML given the enormous amount of extra work that a SPA takes to develop. It is more convincing in a context where there are native applications in the mix that would benefit from an API. So what will a future that involves both MPA-style frontends and APIs look like? I think one possibility is that backends will handle data in a more declarative fashion. In theory, doing so would allow a lot of the "glue" to be generated, and ultimately allow more flexibility on the frontend.


I think it's the difference between generations of people for whom JS was an extra thing to enrich HTML documents, vs. younger generations that see HTML as merely a rendering target for JS.


And we got here by having javascript as an embedded scripting language in things like access control management. "I'm already using javascript on the server in my polkit config!"

https://www.freedesktop.org/software/polkit/docs/latest/polk...

https://discourse.ubuntu.com/t/use-of-javascript-rules-in-po...


Not sure what a BFF is, but having separate frontend and backend development doesn't mean you have 20 microservices.


I think BFF might stand for Backend For Frontend in this context


"A BFF layer consists of multiple backends developed to address the needs of respective frontend frameworks, like desktop, browser, and native-mobile apps."

Ok. Yeah that's only for huge scale things.


A BFF is just an API gateway that is tailored to a single frontend. It's especially helpful if the frontend is a native application as opposed to a web application, hence why it was a pattern championed by Netflix. If you have some native client out there and you can't guarantee that the user will update it, it's helpful for it to only talk to a single service so in case you want to evolve your backend architecture, you can update the BFF implementation and the existing native app won't be broken by your backend changes.

Having a native app out there in the wild on customer computers that is coupled to a bunch of different microservices is hell for when you want to change your backend without breaking your customers' experience and forcing them to update.


The backend-per-frontend thing makes sense for Netflix. They're huge scale like I said, plus they've been around a while, so a lot of backwards compatibility issues will arise. I can't see that being a default pattern to jump to, though.

Backend that handles all the microservices so the frontend isn't doing that job, sure. I didn't even consider doing it differently than that.


Huge scale typically leads to the opposite: more backend generalization, not catering to a specific front-end. That's why Facebook built GraphQL: because they run on anything, like fridges and TVs. It's specifically an attempt to avoid tailoring a backend to a frontend. The article argues that at small scale you shouldn't try to build a generalized backend for any imagined frontend. That's exactly how you get unnecessary performance and complexity bottlenecks, when you really just need to serve your data the one or two ways your frontend demands.


Yeah, our org has a big problem with over-generalization in our internal services that I've been pushing back on. Due to the large size of our org, they're under the false impression that our services are large-scale, but they're actually very small compared to anything external-facing.

When a partner team filed a feature request with our team and I gave them an API that does exactly what they need, they were like "that's it? No 1GiB graph response to DFS through?"


Most of the time, BFF just means that instead of having the frontend call endpoints A, B, and C, it calls a single endpoint D, which calls A, B, and C and handles aggregating and transforming the data into the specific form the frontend needs.
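
A minimal sketch of that, assuming an Express gateway; the service URLs and field names are invented for illustration:

```typescript
import express from "express";

const app = express();

// Hypothetical BFF endpoint: one request from the browser fans out to the
// underlying services and returns exactly the shape the page needs.
app.get("/bff/dashboard", async (_req, res) => {
  // call the internal services A, B and C in parallel
  const [profile, orders, recommendations] = await Promise.all([
    fetch("http://profile-svc/me").then((r) => r.json()),
    fetch("http://orders-svc/recent").then((r) => r.json()),
    fetch("http://recs-svc/for-me").then((r) => r.json()),
  ]);

  // aggregate and reshape into the one payload the dashboard renders as-is
  res.json({
    header: { name: profile.name, avatarUrl: profile.avatar },
    recentOrders: orders.slice(0, 5),
    suggestions: recommendations.map((r: { title: string }) => r.title),
  });
});

app.listen(3000);
```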


Eh, I just call that "good design" under most circumstances.


I don't find that to be an accurate representation of BFF. I've used a BFF with a browser FE simply to weave several microservice endpoints into a tailor-made API for the frontend. This reduces the burden on the microservices by removing the need to make many endpoints for clients, and it helps reduce network calls on the FE.


It’s because if you want to continue getting a job, you want to use what other people are using.

Using most static page frameworks is like using ColdFusion for your career advancement.


We have a bootstrapped and profitable app with thousands of customers, written in Django, that has been operating fine since 2009, while new entrants raise and spend millions to build fancy BFF/microservice solutions, hire salespeople, and lose money for years, only to publicly state they have finally gotten to 300 customers.

Job security is relative.


It’s because you fossils keep thinking server-side rendering is the right call for everything and won’t consider other possibilities.


Feels like the author misses the point of an SPA. If you have a website where every "view" is a /page/a etc., an SPA might not be the best choice. But in an SPA I can be dynamic and trigger data retrieval/sending without route changes.

A very basic example: say I have a checkbox that allows the user to subscribe to push notifications. On click, I send an XHR request to the backend, POST `/subscribe/1234`, which registers the subscription. Next time, I'll check `/subscriptions` or similar to see if the checkbox should be in the "checked" state (i.e. we are subscribed). This is basic functionality which requires a concise REST/JSON API and has nothing to do with your page/route layout à la /page/a, /page/b etc.
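
For reference, a rough client-side sketch of that checkbox flow (the element id and the response shape of `/subscriptions` are my assumptions):

```typescript
// Hypothetical handler: toggling fires a small POST, and on load we ask the
// backend for the current state without reloading the page.
const checkbox = document.querySelector<HTMLInputElement>("#push-subscribe")!;

checkbox.addEventListener("change", async () => {
  await fetch("/subscribe/1234", { method: "POST" });
});

async function initCheckbox() {
  // assumed to return an array of subscribed ids, e.g. [1234, 5678]
  const subs: number[] = await fetch("/subscriptions").then((r) => r.json());
  checkbox.checked = subs.includes(1234);
}
initCheckbox();
```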


Author here.

We do this for SPAs. The POST can be supported just like you explained. Then you can either reload the "page" to see that the checkbox is checked, because the backend will include that as part of the page, or (as an optimization) reload just that checkbox if you created a specialized resource for it. The entire page is still there for you to fetch upon transitions.

Among pages, you will still need a few individualized resources sprinkled around for optimizations/reloads/fragment pagination. The difference between that and a complete API is that they will be entirely design-driven. If a table needs paginating, and is built with data from multiple resources, you do not provide two resources and expect the front-end to paginate and glue them together. Instead, you provide a complete paginated resource for this table, where the data is ready to be displayed as-is.
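
Roughly, such a table resource might look like this (a sketch only; the route, the join, and the field names are invented for illustration):

```typescript
import express from "express";

const app = express();
const PAGE_SIZE = 20;

// stand-in for a query that already joins orders + customers on the server
async function ordersTablePage(page: number) {
  return [{ orderNo: "A-100", customerName: "Ada", total: 42, statusLabel: "Shipped" }];
}

// One paginated resource shaped exactly like the table that displays it.
app.get("/page/orders/table", async (req, res) => {
  const page = Number(req.query.page ?? 1);
  const rows = await ordersTablePage(page);
  res.json({ page, hasNext: rows.length === PAGE_SIZE, rows });
});

app.listen(3000);
```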


That is exactly how stuff worked around 2005. And no, you should not do a full form post just to toggle a checkbox.


Why not?


As a user, I can think of a couple of reasons immediately:

- I don't like page (re)loads. They are usually slower and more likely to fail than a lightweight request, especially in scenarios with a bad connection

- If they fail, it's harder to retry, I see a connection timeout page. With a SPA I see an error message, potentially with a retry button. Or even better: the SPA retries for me a few times.

- I can continue to see all the rest of the page while the action is running

- I can potentially start other actions in parallel

- I prefer to not lose any progress of things on the side, e.g. text-boxes where I already entered/changed some text

- It's easier to inspect what goes wrong by looking at the network tab. Okay, most users don't do that, but for me it's still a pro

There are also advantages, but I think nowadays the cons outweigh those for me.


I’m convinced this is HN-themed satire.


It seems sincere — unlike your comment, which comes across as very off-hand / dismissive.


Yeah, sorry, but I just don't agree with the spirit of the comment, and I think it's thinking like this which leads programmers to over-engineer web applications, which usually harms user experience and wastes company money.

Absolutist statements like "no, you should not do a full form post just to toggle a checkbox" are just silly. There is no minimum bound beneath which a standard web browser form submission doesn't make sense, and writing extra JavaScript code to not only manage an asynchronous network request but also handle subsequent behaviours for success, failure, timeout, etc., is additional complexity which incurs additional cost and a greater potential for system failure.

Sometimes these implementation details and associated costs are necessary, but in most cases they aren't, and it's not an ideal perspective economically to default to the more expensive and complex implementation, especially for dubious benefit.


I wasn’t talking about whether you were right or wrong. I was calling you out for being a dick and not contributing constructively to the discussion.

You’re talking past people, not taking the time to understand what they’re saying, and then being dismissive when they reply.


Ok. I don't see it that way. Most SPAs on the internet should never have been SPAs and it's usually poor judgement which leads a project in that direction. That poor judgement includes things like the characterisation of full page reloads as overly cumbersome and asynchronous requests as being lightweight. It also includes the expensive decision to try to reimplement the native browser behaviour that you get for free, as the commenter alluded to by suggesting asynchronous requests can be programmed to show a custom UI for error messages and use custom behaviour for retries of failed requests.

I believe I have indeed taken the time to understand what people are saying in these comments specifically and on this topic more broadly. And I just don't agree with the opinions presented. I think they're silly, in the same way that microservices are almost always adopted for silly reasons. The popularity of either approach doesn't negate their silliness.

For more on this topic, this article is good: https://www.timr.co/server-side-rendering-is-a-thiel-truth/


You are doing it again...


I mean, you asked a question and I gave some reason that I could think of. How exactly do you "disagree with the spirit of my comment"?

If you don't want to hear an answer, maybe you should just not ask.


> Absolutist statements like "no, you should not do a full form post just to toggle a checkbox" are just silly

That is because you didn't read or accept the frame that I set initially. I was very clearly referring to web apps that are SPAs (not websites!). Within that context, that statement is less absolutist and still true in 95% of cases.


Ruby on Rails has been able to do all of this with remote forms for at least 10 years now.


Very sceptical that RoR can obtain/interact with the browser push notification API, as I stated in my example. You'd have to come up with glue JS code yourself, and that is where it gets messy. Push notifications are just one of many APIs that are client-only (and again, please: if you don't need to interface with those APIs, then you might not need an SPA in the first place).


Just because "remote forms" have forms in their name it doesn't mean that they are conceptually the same thing. They are not, so obviously they can achieve those things, but they are built with js.


Having form elements automatically update the back end is extremely annoying UI imo. There's usually no indication that it's actually doing that, and if there is I generally don't trust it.


Yeah, this goes back a couple of decades where "page a" was just "page-a.asp" or "page-a.php" and you did everything to do with that page in that file. Simple, and you could mostly make changes to page a without worrying that you'd break other pages.

Need page b? Copy page-a.asp to page-b.asp, gut the page-a logic, and put in the page-b logic.

Before you get too nostalgic for those simple days, do we remember why we stopped developing web pages that way?


Because you can't have a lot of interactivity with a nice responsive UI like this if your network has high latency and low bandwidth. So we pushed a lot of logic to the browser. Now it turns out it doesn't scale well, so maybe we should go back to copying page a to page b and calling it a day, the same way we came back to static linking once disk space stopped being a limiting factor.


> ...do we remember why we stopped developing web pages that way?

Because we needed an API for our mobile apps, and once we had that, SPAs seemed convenient.

It was definitely, absolutely NOT because the page-centric way of building sites didn't work (it does), or because SPAs are better (they aren't).


I'm a systems engineer, and SPAs are a superior architecture to deal with from a delivery POV. It is very difficult to cache a dynamic web app with a CDN. There are ways (SSI, managing variants, purging, and so on), but it's a huge pain. And if you don't have a sophisticated CDN and are using something like CloudFront, forget about all those fancy features.

SPA? I can serve the whole thing with static files and remove 90% of the traffic from my web server. The API can be served from a different path from the static files. API requests are super tiny and more tolerant to fluctuating network conditions, and you can hide a lot of that with Javascript.


I have a hobby project that's very static: a website for viewing webscraped snippets of writing from another site. I hosted it on Github Pages. I didn't want to use a static site generator which would've generated easily 10,000 files and with all the duplicated code would have put me above GitHub's 1GB limit for Github Pages. Similarly Cloudflare Pages has a 20,000 file limit.

So instead I combined it into various JSON files. Now I have an SPA downloading the JSON as necessary, and I don't need any API/backend/database. I'm considering moving to SQLite now that there are official WASM builds.


Exactly. Even non-SPA frameworks (such as NextJS with static builds) mean you can compile all of your frontend code to S3 (where it's a negligible cost) and only pay for the API instances.


Because for the last 10 years, "code schools" and boot camps have taught JavaScript/TypeScript with React as "programming", where everything outside your immediate DOM is an API call, one not worth knowing about since it's "backend". The reality is that SSR has never been a bad idea; we started that way. Now we are seeing reimplementations of it in various forms to "solve the API call hell" that was self-induced, just like the front-end bundle size problem was self-induced.


> Imagine if you could just send it the whole “page”

Could have stopped here


Maybe for separation of concerns between data and UI? The browser is not going to be the only client?


You can have separation of data and UI and still send the whole page.

The vast majority of the time, the browser is in fact going to be the only client.

Once you reach the point where it's not, build the API to suit the new client rather than trying to build one that does both.


True, you could insert something in the middle that generates a page depending on the user agent. But then it requires extra SSR logic.

But on the other hand, you can have an iOS or Android app that depends on the same data. So the browser is not the only client in general, especially nowadays, I think?


That's the crux of the question. The BFF approach says that you remove logic from the client and put it in a server-side component. Not that you add extra logic. Then you might find that you need to copy that server-side component sideways for a new client, if the view layer in that client is sufficiently different to what you already have. The browser can be the only client of its BFF API; the mobile apps can either speak to the same API or have their own.

Dramatically more important than any theoretical, abstract consideration in deciding whether this can work is what your teams look like. The client team should almost certainly own the BFF API, which has implications for skillsets and maturity.


That's only for a React type of application, if I'm not mistaken?

If so, that doesn't handle the general case (native apps for instance)


No, it's for everything.


Could you explain further? I don't understand.


Well now you’re building a general purpose API


I think there is a middle-ground concern here: avoiding request waterfalls.


That's what the database is, isn't it? The separation layer between the raw data and how it's projected into some interface (user or otherwise).


Yes, in a way. There are a few more layers in between.

In fact, you can see it as 2 decoupled kinds of database:

- the storage database

- the working in-memory UI database that holds the data represented (e.g. on your screen (GUI)).

The goal in general is to modify the data contained in this UI database and then to push the changes to storage.


This breaks once you need to update parts of the page without reloading everything.


JS can update the DOM dynamically.

Nothing prevents the main response from being served as regular HTML and subsequent XHR requests for some parts of the page being JSON or some such data format. In fact, I bet that sending server-side rendered HTML is faster than sending a full structured page as JSON, which has to be decoded, parsed, and then converted to HTML.
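
For example (the fragment endpoint, the element id, and the event name are all made up), something as small as this keeps the main response as HTML and only refreshes one region afterwards:

```typescript
// Hypothetical snippet: the full page arrives as server-rendered HTML, and a
// small script later refreshes just one region from a rendered fragment.
async function refreshCart() {
  const res = await fetch("/fragments/cart"); // server returns an HTML fragment
  const html = await res.text();
  document.querySelector("#cart")!.innerHTML = html;
}

// refresh the cart region whenever the app signals that an item was added
document.addEventListener("cart:item-added", refreshCart);
```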


I run a popular app (native iOS and Android) and a web version. I find having a single JSON-serving API much easier to maintain. The frontends are isolated projects. The backend is just a simple layer with static data served as JSON. If you had static HTML, it would mean updating all the static files whenever a UI change is needed, which makes it overly complicated. Caching also becomes a challenge, and so does performance. Currently we're just running a few Hetzner instances to serve 50k requests per second. That would be really hard with server-side rendered HTML.


> If you had static HTML, it would mean updating all the static files whenever a UI change is needed, which makes it overly complicated

We do use a templating engine and separate CSS files, so no, not at all: not every static file needs updating on every change.

> Caching also becomes a challenge, and so does performance

Not at all, not even on Black Fridays.

> That would be really hard with server-side rendered HTML.

Why do you pose that hypothetical when our real-world experience is that it's easy?


Do you build each interface with different tools (e.g. Swift, Java, and React), or one of the unifying tools (Flutter, React Native, etc)?


Then you're stuck with two kinds of templating.


Reminds me of the early AJAX days.


That's true. But that's only really a concern if you're building an SPA. If you're not, having multiple pages is generally faster than running some JS framework to reload parts of the page. Plus, you get the benefit of every state combination having its own URL. You don't need to write any custom state reconstruction code.

This was the original design of REST as applied to the web. It was explicitly designed in such a way that it was forbidden to reload parts of the page. This makes it so that every state has its own URL and you can therefore link to every state. Deep linking, if you wish, for all of the web.


This is exactly what HTMX does, and it works: you return an HTML response and statically annotate a part of the DOM to be replaced with it.

Granted, this requires JavaScript on the front end, but it's a framework that can be shared between different websites, so you're not writing any custom JavaScript for your site.
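
A rough sketch of that pattern (the route, ids, and markup are illustrative; hx-get, hx-target, and hx-swap are the actual htmx attributes), with the annotated markup shown in a comment and the server side as an Express handler:

```typescript
import express from "express";

const app = express();

// markup served with the initial page; htmx swaps the response into the target:
//   <button hx-get="/orders?page=2" hx-target="#order-list" hx-swap="innerHTML">
//     Load more
//   </button>

app.get("/orders", (req, res) => {
  const page = Number(req.query.page ?? 1);
  const items = ["A-10", "A-11"]
    .map((o) => `<li>Order ${o} (page ${page})</li>`)
    .join("");
  // return an HTML partial, not JSON; no page-specific JavaScript is needed
  res.send(`<ul>${items}</ul>`);
});

app.listen(3000);
```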


Around 2005 was the first time I "accelerated" a website by requesting the whole page in the background and swapping out what had changed.

We eventually trimmed what was returned a bit, but just getting rid of the page transition was 90% of the work.


You can still return html partials from your endpoints and use htmx to swap content without a full reload :)


yes, but then it’s not one request per page


Only if there's an update: htmx encourages you to render full HTML pages but still gives you the ability to do in-place updates, which is what most folks think requires a JS front end with a JSON API backend and client-side rendering.


HTMX can swap multiple DOM elements from a single HTTP response

https://htmx.org/extensions/multi-swap/


In that case you can just parse the DOM clientside and insert it into the page.


I think it's about the API as such, not about how to serve the webpage.

Whether it's JSON or HTML, don't build a general-purpose API if your own frontend is the only client.


>Why not just send HTML as a single response at this stage?

Is it time to discover memcache again? I think it's about time.


The first 3 sentences of the article imply that the whole thing is written with an "if you must" attitude.

That said, a lot of companies have front-end teams, and those front-end teams commit years to building React components. So even if you render server-side, this is how you'd put data into those components. I'm a backend dev. I don't like some of what frontend is doing, but you gotta work with people, right?


This is exactly how Next.js works. But even more importantly: why not use GraphQL in this situation? This is exactly what it excels at:

1. The backend can stay entity-centric, which is what it does best.

2. The frontend can request a page's worth of data from diverse entities in a single go.
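
Something along these lines (the schema and field names are invented for illustration), so the whole page arrives in one round trip:

```typescript
// Hypothetical single GraphQL request covering everything "page A" shows.
const PAGE_A_QUERY = `
  query PageA($userId: ID!) {
    user(id: $userId) { name avatarUrl }
    orders(userId: $userId, first: 5) { id total status }
    notifications(userId: $userId) { id message }
  }
`;

async function loadPageA(userId: string) {
  const res = await fetch("/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query: PAGE_A_QUERY, variables: { userId } }),
  });
  const { data } = await res.json();
  return data; // { user, orders, notifications } from one request
}
```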


Because you want to paginate. Because you want to update the page piecemeal based on an event from another user. Because you want to use the same data on multiple pages and want to write the authorization and filtering logic once.


Got to pump the CV with frameworks, you know? Promotions don't write themselves. Bonus point: job security. And when you leave, who cares what poor schmuck will have to support it?


Something about injecting HTML from an API, even from a controlled server, feels wrong.


But that's the point: it's not an API, it's just an app web server.

There’s nothing wrong with a web server serving HTML; that’s their whole purpose.



