you don't have a rendering problem. you have a choosing problem.
There are two kinds of web applications. The first has content that is mostly static - pages, posts, product listings, documentation, anything where the HTML is roughly the same for every user and freshness matters less than availability. The second has high interactivity - dashboards, editors, real-time collaboration, drag-and-drop, optimistic UI, state that lives and breathes between user actions.
These two things have very different rendering requirements. The first wants to be server-rendered. The second wants to be client-rendered. The industry spent fifteen years building frameworks that promise to do both simultaneously, at maximum complexity, charging you the cost of both without giving you the full benefit of either.
HTMX still has a place here: it can refresh fragments when interaction needs motion. It does not need to own routing, state, templates, and my remaining patience. The question is whether you reached for HTMX because it was the right tool or because someone on Twitter called it "the future" and it felt contrarian and interesting. I ask because I did the second one. I am not proud.
// the lie: "just add ssr to your spa"
Somewhere around 2019, the JavaScript ecosystem collectively decided that the problem with SPAs - slow initial load, bad SEO, empty HTML shells that crawlers hate - could be solved by also rendering the SPA on the server. This sounds reasonable. It is, in practice, a horror.
The promise is: your React app renders to HTML on the server for the first request (fast, SEO-friendly, beautiful), then the client downloads the JS bundle and "hydrates" - attaching event listeners to the server-rendered DOM and taking over as a full SPA. Best of both worlds. The marketing materials are gorgeous.
The reality: the server sends 14KB of HTML. The browser renders it instantly. The user sees the page. THEY CANNOT CLICK ANYTHING. The 280KB JavaScript bundle is still downloading. The bundle executes. React mounts. React looks at the DOM and says "yes this looks right" and begins reconciling the virtual DOM with the real DOM. During this process, WHICH CAN TAKE 3-6 SECONDS ON A BUDGET PHONE, the user is staring at a fully-rendered page that does not respond to input. This is called "Time to Interactive" and it is a metric that exists specifically to measure this exact form of suffering. You have achieved the aesthetic of performance without the reality of it. Congratulations.
This is the fundamental lie of "isomorphic" or "universal" rendering. You are not getting SSR and SPA. You are getting the complexity of both - server infrastructure that can run Node, a build system that compiles for two targets, hydration logic that can and does produce mismatches, a JavaScript bundle that the user must download anyway - plus the performance benefit of neither, because your Largest Contentful Paint is fast but your Time to Interactive is a crime scene.
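To make the mismatch failure mode concrete, here is a toy model of hydration in TypeScript. This is not React's reconciler - every name in it (serverRender, hydrate, Component) is invented - it only shows the contract that breaks: the server renders once, the client renders again later, and hydration assumes both outputs are byte-identical.

```typescript
// Toy model of hydration. Not React — just the core contract.
type Component = () => string;

function serverRender(c: Component): string {
  return c(); // runs once on the server, at request time
}

function hydrate(serverHtml: string, c: Component): "ok" | "mismatch" {
  const clientHtml = c(); // runs AGAIN in the browser, seconds later
  return clientHtml === serverHtml ? "ok" : "mismatch";
}

// Deterministic markup hydrates cleanly.
const greeting: Component = () => "<h1>hello</h1>";
console.log(hydrate(serverRender(greeting), greeting)); // → "ok"

// Anything that differs between the two runs — timestamps, locale
// formatting, feature detection, a generated id — does not.
let renderCount = 0;
const unstable: Component = () => `<li id="item-${renderCount++}">x</li>`;
console.log(hydrate(serverRender(unstable), unstable)); // → "mismatch"
```

Everything nondeterministic between the two runs is a mismatch waiting for a budget phone to find it.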
// the trade-off that was there the whole time
There is no free lunch. There is no framework that collapses the fundamental trade-off between content delivery and interactive applications. There is only the choice of which trade-off you are making, and whether you are making it consciously or whether Next.js is making it for you while you stare at a Lighthouse score.
| YOUR ACTUAL SITUATION | WHAT YOU NEED | WHAT PEOPLE REACH FOR | OUTCOME |
|---|---|---|---|
| Blog, docs, marketing site, product page | HTML. Maybe a template engine. Done. | Next.js with ISR, React Server Components, edge middleware | $200/mo Vercel bill. 400ms TTFB. Team of one. Haunted. |
| E-commerce with product listings + cart | SSR for product pages. A bit of JS for the cart. Boring and correct. | Full SPA with SSR hydration + client-side cart state + GraphQL | Hydration mismatch on cart count. Checkout broken in Safari. 6-hour debug session. |
| SaaS dashboard with real-time data, filters, live updates | SPA. React or Vue. Client-side state. WebSockets. | Next.js App Router, Server Actions, Suspense boundaries, useTransition | Works technically. Nobody on the team understands the data flow. New hire cried. |
| Collaborative editor (Figma/Notion-style) | SPA. No question. Full client ownership. CRDTs if you're serious. | Someone will try to SSR this. They will not succeed. They will not stop trying. | Six months. No product. Many blog posts about Operational Transform. |
| Simple CRUD app, internal tool, admin panel | Server-rendered HTML. PHP, Rails, Django, Slim, whatever. Add HTMX sparingly. | React frontend + REST API + JWT auth + React Query + Zustand | Three-month project becomes eight months. Ironically less interactive than a 2003 PHP app. |
// htmx: a confessional
I used HTMX. I want to talk about why, because it was not a considered technical decision so much as a specific kind of boredom that afflicts developers who have been doing the same thing for too long.
The pitch for HTMX is seductive: you keep your server-rendered HTML, you just add hx-get and hx-swap attributes to your elements, and the server returns HTML fragments that get swapped into the DOM without a full page reload. No JavaScript bundle. No virtual DOM. No state management. It is genuinely clever.
The reality is that it works beautifully for exactly what it says it does - fragment refreshes, inline form submissions, partial page updates - and becomes immediately painful the moment you try to use it for things it was not designed for, which is what you will inevitably do because humans cannot leave things alone.
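For the happy path - the thing HTMX is actually scoped to - a live-search fragment endpoint is the canonical shape. A sketch using node:http; the route, the markup, and the renderSearchResults helper are all made-up names, and any server stack (PHP, Rails, Django) works identically:

```typescript
import { createServer } from "node:http";

// The page ships a plain input; hx-get re-fetches a fragment as you type.
const page = `
  <input name="q"
         hx-get="/search"
         hx-trigger="keyup changed delay:300ms"
         hx-target="#results" />
  <ul id="results"></ul>
`;

const items = ["htmx", "hypermedia", "hyperscript"];

// The server renders the fragment. No JSON, no client-side templates —
// the response IS the UI.
function renderSearchResults(q: string): string {
  const hits = items.filter((i) => i.includes(q.toLowerCase()));
  return hits.map((i) => `<li>${i}</li>`).join("\n");
}

const server = createServer((req, res) => {
  const url = new URL(req.url ?? "/", "http://localhost");
  res.setHeader("content-type", "text/html");
  if (url.pathname === "/search") {
    res.end(renderSearchResults(url.searchParams.get("q") ?? ""));
  } else {
    res.end(page);
  }
});
// server.listen(3000);
```

No bundle, no client state, no virtual DOM. As long as the interaction is "fetch fresh HTML and swap it in," this is the whole architecture.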
week 1: replaced a pile of fetch-and-innerHTML glue with one hx-get attribute. this rules.
// this is the correct reaction. you are right.
week 2: let me add hx-push-url so URLs update on fragment swap
// fine. still ok. this is supported.
week 3: let me add Alpine.js for the dropdown state
// you have added a second framework. reflect on this.
week 4: this modal needs to know about state from three other components
// you are building a SPA with extra steps and no tooling.
week 6: i have written 400 lines of _hyperscript. what have i done.
// go back. choose your lane. this is the path of suffering.
HTMX is not bad software. It is correctly scoped software used incorrectly by people (me) who wanted the simplicity of server-rendering and the interactivity of an SPA simultaneously, which is the exact trade-off that cannot be collapsed. HTMX is the right tool for: sprinkled interactions on server-rendered pages, inline edits, live search, partial refreshes. It is the wrong tool for: applications where client-side state is the product.
// the actual decision matrix. one page. use it.
Two questions. That's it. Answer both honestly and the architecture reveals itself:
Q1: Is each route primarily content or interaction?
CONTENT: blog posts, product pages, docs, listings, articles, static data
- use server rendering. twig, erb, blade, jinja, doesn't matter.
- add js for enhancement only (forms, toasts, modals, search).
- htmx is fine here. vanilla js is fine here. alpine is fine here.
INTERACTION: editor, dashboard, real-time, drag/drop, optimistic UI
- use a SPA. react, vue, svelte, pick one, commit.
- your server is an API. your client owns the UI. that's the deal.
- do not SSR this. do not hydrate this. just ship the JS.
Q2: Did you answer "both" to Q1?
- you have a mixed app. split it. content routes = SSR. app routes = SPA.
- this is fine and normal. wordpress.com does this. github does this.
- what you should NOT do: force one framework to pretend it's both.
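What the split can look like in practice, sketched as a single router. The paths, the app shell, and the renderPost helper are assumptions for illustration; in real life the two halves may well be separate services or separate repos:

```typescript
// Content routes: fully server-rendered HTML, cacheable, no bundle.
function renderPost(slug: string): string {
  return `<!doctype html><article><h1>${slug}</h1></article>`;
}

// App routes: a static shell that loads the SPA bundle. The server's
// only other job on this side is the JSON API the SPA talks to.
const appShell = `<!doctype html>
<div id="root"></div>
<script src="/assets/app.js"></script>`;

function route(pathname: string): { body: string; type: string } {
  if (pathname.startsWith("/app")) {
    return { body: appShell, type: "text/html" }; // SPA owns everything here
  }
  if (pathname.startsWith("/api/")) {
    return { body: JSON.stringify({ ok: true }), type: "application/json" };
  }
  // everything else is content: rendered on the server, done
  return { body: renderPost(pathname.slice(1) || "home"), type: "text/html" };
}
```

The point is the boundary, not the router: the two halves never share a rendering model, so neither pays the other's costs.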
"But we need SSR for SEO." This is the incantation used to justify bolting server infrastructure onto applications that are fundamentally interactive. Here is the thing: if your SaaS dashboard is behind a login, GOOGLE CANNOT SEE IT ANYWAY. Googlebot does not have your credentials. You do not need to SSR the authenticated interior of your application for SEO. You need to SSR your landing page, your marketing site, your pricing page, your blog. Those are almost certainly not the pages where you are fighting with hydration mismatches at 2am. Stop conflating "our marketing site needs good SEO" with "our entire application must be server-rendered." THEY ARE DIFFERENT APPS. Make them different apps. This is allowed. There is no law.
// verdict. for the last time.
The framework is not the problem. The meta-framework that promises to solve the tension between two fundamentally different rendering models is the problem. It is not that Next.js or Nuxt or SvelteKit are bad - they are well-engineered tools with clear use cases. The problem is reaching for them as a default, before you have honestly answered whether your application is primarily a content delivery system or a client-side application.
Most applications are one or the other. Most content sites do not need a client runtime. Most interactive apps do not benefit from SSR. The ones that genuinely need both are complex, high-traffic, well-staffed products where the engineering cost of the hybrid approach is justified by real product requirements. That is a small and specific category. Most of us are not in it.
If your content is mostly static: server-render it. PHP, Python, Ruby, Go - whatever runs on your VPS without drama. Sprinkle JavaScript where interaction actually earns it. Your users get fast pages. Your server bill is low. Your architecture fits on a napkin.
If your app is genuinely interactive: write a SPA. Embrace the client. Return JSON from your API. Let your JavaScript own the UI. Accept that the first load ships a bundle. Invest in a good loading experience. Move on.
If you answered "both": split by route. Marketing and content on SSR. App behind auth as a SPA. Different repos if necessary. This is not defeat. This is clarity.
The trade-off does not disappear because you named your file page.server.tsx. It just hides in the build config until 11pm on a Wednesday.
// filed under: things i should have decided before writing code · htmx still installed, hasn't caused problems yet · this page is server-rendered html. no js bundle. take notes.