Script-based prerendering feels like a cheat code for SEO.
Run your SPA once, capture the HTML, deploy it to a CDN — done. No servers. No SSR. No complexity.
But the moment your app becomes even slightly dynamic… it starts breaking in ways that aren't obvious at first.
The Core Idea (And Why It's Attractive)
Script-based prerendering works like this:
- Run your SPA in a headless browser (or build tool)
- Let all JavaScript execute
- Capture the fully rendered HTML
- Deploy that HTML as static files
So instead of serving an empty <div id="root">, your HTML already contains full page content, meta tags (OG, Twitter, schema.org), and fully resolved routes.
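In practice, the capture step is usually a short script. Here's a minimal sketch assuming Puppeteer and a locally served build; the port and route list are placeholders:

```ts
// Minimal prerender capture sketch.
// Assumes: `npm i puppeteer`, and the built SPA is already being
// served at http://localhost:3000 (port and routes are placeholders).
import puppeteer from "puppeteer";
import { writeFile, mkdir } from "node:fs/promises";
import path from "node:path";

const routes = ["/", "/pricing", "/about"]; // hypothetical routes

const browser = await puppeteer.launch();
const page = await browser.newPage();

for (const route of routes) {
  // Load the SPA and wait until network activity settles,
  // i.e. client-side rendering and data fetching have finished.
  await page.goto(`http://localhost:3000${route}`, { waitUntil: "networkidle0" });

  // Capture the fully rendered DOM as static HTML.
  const html = await page.content();

  const outFile = path.join("dist-prerendered", route, "index.html");
  await mkdir(path.dirname(outFile), { recursive: true });
  await writeFile(outFile, html);
}

await browser.close();
```

Each route is rendered once, and the resulting HTML files are what actually ship to the CDN.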
This solves the classic SPA SEO problem: "Bots can't see my content."
So far, so good.
Where This Starts to Break
The problem is this model treats all traffic the same. And that's where things go sideways.
1. You're Serving Prerendered Content to Everyone
These systems don't distinguish between search bots, AI crawlers, and real users. They serve the same prerendered HTML to all of them.
That sounds fine… until you think about what that HTML actually represents:
It's a snapshot of your app at a single point in time.
Not live content. Not user-specific content. Just a frozen render.
2. Personalization Breaks Completely
Modern apps rely heavily on runtime personalization: auth state, user-specific data, feature flags, A/B variants.
None of that works with prerendered HTML, because the HTML was generated before the user existed.
So you end up with logged-out views for logged-in users, generic content where personalization should exist, and hydration mismatches if the client tries to "fix" it later.
At best, it's janky. At worst, it's broken.
3. Dynamic Content Becomes Stale Instantly
This is the big one.
Any content that depends on "now" is immediately wrong: real-time data, API-driven views, anything that changes between builds.
All of that data was captured during prerender. Your site becomes a cached snapshot pretending to be a live application.
Unless you're constantly rebuilding (which introduces its own problems), your content is always drifting out of date.
What This Looks Like in Production
Here's what this actually looks like when teams use this approach: frozen snapshots served to real users, personalization silently missing, and content drifting further out of date with every passing hour.
None of these are catastrophic individually. But together, they create a system that feels… off.
And worse — hard to debug.
4. Rebuild Pressure Becomes a Scaling Problem
To compensate for stale content, teams start doing frequent rebuilds, incremental static regeneration hacks, and cron-based re-renders.
But now you've introduced build pipeline complexity, deployment lag, and race conditions between updates and renders.
You didn't avoid SSR. You rebuilt it — just without the parts that make it work.
5. Hydration Becomes Fragile
When the browser loads the prerendered HTML, your SPA still hydrates. But now there's a risk: the HTML says one thing, the JS runtime computes something else.
Especially for auth-based content, feature flags, and real-time data.
The UI you ship is no longer a single source of truth. It's a guess that gets corrected after load.
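As a concrete illustration, here's a hedged sketch of that pattern in React; the component and the /api/session endpoint are hypothetical:

```tsx
// Illustrative React component (not from any specific app) showing
// why hydration against a prerendered snapshot is fragile.
import { useEffect, useState } from "react";

export function AccountMenu() {
  // At hydration time this matches the snapshot, which was captured
  // logged out, so the initial render says "Sign in" for everyone.
  const [user, setUser] = useState<{ name: string } | null>(null);

  useEffect(() => {
    // After load, the client fetches the real session (hypothetical
    // endpoint) and "corrects" the UI: a visible flash for logged-in
    // users, and mismatch warnings in stricter hydration setups.
    fetch("/api/session")
      .then((res) => (res.ok ? res.json() : null))
      .then(setUser)
      .catch(() => setUser(null));
  }, []);

  return <button>{user ? `Hi, ${user.name}` : "Sign in"}</button>;
}
```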
6. It Solves SEO… But Creates Product Constraints
This approach is fundamentally SEO-first. Which is fine — until it starts dictating how your product works.
You'll find yourself avoiding real-time features, personalization, and dynamic UI patterns — because they don't fit the prerender model.
That's a dangerous place to be.
Where This Approach Does Work
To be fair — this model is not useless. It works really well for content that is the same for every visitor: marketing pages, documentation, blogs, landing pages.
Basically: anything that doesn't depend on user state or real-time data.
A Better Mental Model
The real issue is this: script-based prerendering tries to turn a dynamic system into a static one. But modern web apps are stateful, personalized, and frequently changing. You can't flatten that into a static snapshot without losing something.
Instead of asking:
"How do I prerender everything?"
The better question is:
"Who actually needs prerendered content?"
The answer:
Not humans.
That's the shift. Serve prerendered content to the consumers that need it, and serve the live, dynamic app to everyone else.
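In practice, that split can be as small as a user-agent check at the edge. The sketch below is illustrative only (hypothetical snapshot origin and bot list), not DataJelly's actual API:

```ts
// Illustrative only: audience-aware delivery as a generic edge handler.
// The snapshot origin and bot list are placeholders.
const BOT_UA = /googlebot|bingbot|gptbot|perplexitybot|duckduckbot/i;

export default async function handleRequest(request: Request): Promise<Response> {
  const ua = request.headers.get("user-agent") ?? "";
  const { pathname } = new URL(request.url);

  if (BOT_UA.test(ua)) {
    // Bots and AI crawlers get a fully rendered HTML snapshot of this URL.
    return fetch(`https://snapshots.example.com${pathname}`);
  }

  // Humans get the live SPA, untouched.
  return fetch(request);
}
```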
🔥 DataJelly vs Build-Time Prerendering
Clear winners, no ambiguity.
| Category | DataJelly (Edge Proxy + Snapshots) | Build-Time Prerendering |
|---|---|---|
| Rendering Model | Runtime (request-aware) | Frozen at build time |
| Content Accuracy | Always reflects current state | Snapshot of the past |
| Bots vs Humans | Different output per audience | Same content for everyone |
| SEO Output | Full HTML snapshots | Full HTML snapshots |
| AI Readability | Structured Markdown output | Raw HTML only |
| Personalization | Works correctly | Completely broken |
| Dynamic Content | Live (API / real-time) | Stale until rebuild |
| Hydration Behavior | No mismatch | Frequent mismatch issues |
| User Experience | True app experience | "Fake" snapshot → corrected later |
| Rebuild Pressure | None | Constant rebuilds required |
| Operational Complexity | Moderate (edge layer) | Simple (static hosting) |
| Scales with App Complexity | Yes | Breaks quickly |
| Best Fit | Real apps | Static sites only |
⚡ What This Actually Means
Prerendering
- You freeze your app at build time
- Then try to patch reality back in with hydration
DataJelly
- You keep your app dynamic
- And only optimize delivery for bots
🎯 Decision Shortcut
If your app has users, data, or frequent updates → DataJelly
If your site is basically a brochure → Prerender is fine
💥 One-Liner
Prerendering shows everyone a screenshot of your app.
DataJelly shows bots the snapshot — and users the real thing.
Final Take
Script-based prerendering is a clever workaround for SPA SEO. But it comes with tradeoffs that become very real as your app grows: stale content, broken personalization, fragile hydration, and constant rebuild pressure.
It's not wrong — it's just the wrong abstraction for modern apps.
If your site is static, this works great. If your site is an application — it will fight you the entire way.
Quick Test: What Do Bots Actually See?
Most people guess. Don't.
Run this test and look at the actual response your site returns to bots.
Fetch your page as Googlebot
Use your terminal:
curl -A "Googlebot" https://yourdomain.comLook for:
- Real visible text (not just
<div id="root">) - Meaningful content in the HTML
- Page size (should not be tiny)
Compare bot vs browser
Now test what a real browser gets:
curl -A "Mozilla/5.0" https://yourdomain.comIf these responses are different, Google is indexing a different page than your users see.
Stop guessing — measure it.
Real example: 253 words vs 13,547
We see this constantly. Here's a real example from production: Googlebot saw 253 words and 2 KB of HTML. A browser saw 13,547 words and 77.5 KB. Same URL — completely different content.

If your HTML doesn't contain the content, Google doesn't either.
Compare Googlebot vs browser on your site → HTTP Debug Tool
Check for common failure signals
We see this all the time in production:
- HTML under ~1 KB → usually empty shell
- Visible text under ~200 characters → thin or missing content
- Missing <title> or <h1> → weak or broken page
- Large difference between bot vs browser HTML → rendering issue
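If you'd rather automate those checks, a small script can apply the same rules of thumb. This is a sketch assuming Node 18+ with built-in fetch (run with something like npx tsx check-bot.ts https://yourdomain.com); the thresholds mirror the list above:

```ts
// Sketch: apply the failure-signal rules of thumb to one URL,
// fetching as Googlebot. Thresholds are heuristics, not hard limits.
const url = process.argv[2] ?? "https://yourdomain.com";

const res = await fetch(url, { headers: { "user-agent": "Googlebot" } });
const html = await res.text();

// Strip scripts, styles, and tags to approximate visible text.
const text = html
  .replace(/<script[\s\S]*?<\/script>/gi, "")
  .replace(/<style[\s\S]*?<\/style>/gi, "")
  .replace(/<[^>]+>/g, " ")
  .replace(/\s+/g, " ")
  .trim();

const checks: Array<[string, boolean]> = [
  ["HTML under ~1 KB (likely empty shell)", html.length < 1024],
  ["Visible text under ~200 chars (thin or missing content)", text.length < 200],
  ["Missing <title>", !/<title[^>]*>\s*\S/i.test(html)],
  ["Missing <h1>", !/<h1[\s>]/i.test(html)],
];

for (const [label, failed] of checks) {
  console.log(`${failed ? "FAIL" : "ok  "}  ${label}`);
}
```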
Use the DataJelly Visibility Test (Recommended)
You can run this without touching curl. It shows you:
- Raw HTML returned to bots (Googlebot, Bing, GPTBot, etc.)
- Fully rendered browser version
- Side-by-side differences in word count, HTML size, links, and content
What this test tells you (no guessing)
After running this, you'll know:
- Whether your HTML is actually indexable
- Whether bots are seeing partial content
- Whether rendering is breaking in production
This is the difference between "I think SEO is set up" and "I know what Google is indexing."
If you don't understand why this happens, read: Why Google Can't See Your SPA
If this test fails
You have three real options:
- SSR: works if you can keep it stable in production
- Prerendering: breaks with dynamic content and scale
- Edge Rendering: reflects real production output without app changes
If you do nothing, you will not rank consistently. Learn how Edge Rendering works →
This issue doesn't show up in Lighthouse. It shows up in rankings.
Related Reading
Dynamic Rendering vs Prerendering
Understanding the key differences and when to use each.
JavaScript SEO Guide
How search engines handle JavaScript-rendered content.
SPA SEO Deep Dive
The full picture on making single page apps visible.
How Snapshots Work
The rendering pipeline behind DataJelly snapshots.
Why Google Can't See Your SPA
The rendering gap that kills search traffic.
Sitemap Exists But Google Ignores Pages
Why discovery ≠ indexing — and the rendering fix.
Snapshot Asset Test
Test how snapshot rendering handles your assets.
DataJelly Edge
Edge rendering for bot visibility — no code changes.
Prerender vs SSR vs Edge Rendering
Side-by-side comparison of what actually works for SEO in production.
Why Script Prerendering Breaks on Real Apps
The production deep-dive — 5 failure patterns that break quietly.
Curious what bots actually see on your site?
Run a free bot visibility test — compare the human view vs. what search engines and AI crawlers receive.
Run a free bot test