Prerender vs SSR vs Edge Rendering: What Actually Works for SEO
Your site loads fine. Your analytics work. Your pages exist. Google still doesn't index them. This is almost always a rendering problem — and we see it constantly.
The Real Problem
Your site looks perfect in Chrome. But that doesn't matter for search engines. Search engines index the initial HTML response, not your hydrated React app.
We see this constantly across production sites: the browser shows full, rendered content while the raw HTML response is nearly empty. That gap is why your pages don't rank. The content exists — Google just never sees it.
What's Actually Happening
There are two completely different outputs for the same page. Your browser gets one thing. Googlebot gets something else entirely.
Real example: /pricing
- Browser request: /pricing → 1,200 words, full layout
- Bot request: curl -A "Googlebot" → <div id="root"></div> plus scripts

That page will not rank. It doesn't exist from Google's perspective.
What Most Guides Get Wrong
Most SEO advice assumes JavaScript rendering is reliable. It's not.
You'll hear the usual advice constantly: Googlebot executes JavaScript now, so client-side rendering indexes fine. None of that fixes empty HTML. Here's what actually happens in production:
If your HTML response is under ~5 KB or contains no real text, you're already losing.
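You can measure both signals with curl and standard Unix tools. A minimal sketch, assuming a placeholder domain; the tag-stripping regex is only approximate:

```bash
#!/usr/bin/env bash
# Rough indexability check: how many bytes of HTML, and how much visible
# text, does a bot actually receive? yourdomain.com is a placeholder.
URL="https://yourdomain.com/"

HTML=$(curl -s -A "Googlebot" "$URL")
echo "HTML bytes: $(printf '%s' "$HTML" | wc -c)"

# Approximate visible text: drop <script>/<style> blocks, then strip tags.
TEXT=$(printf '%s' "$HTML" \
  | perl -0777 -pe 's/<(script|style)\b.*?<\/\1>//gis; s/<[^>]+>//g' \
  | tr -s '[:space:]' ' ')
echo "Text chars: $(printf '%s' "$TEXT" | wc -c)"
```

If the text count is a tiny fraction of the byte count, bots are mostly receiving JavaScript.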
What We See in Production
These are repeatable, measurable failures. Not theoretical risks — things that break on real sites every week.
01. Empty HTML (most common)
HTML size: 2–3 KB. Visible text: 0–50 chars. All content loaded via JS.
Result: Page not indexed or indexed as empty.
02. Script shell only
10–20 <script> tags. One root div. No semantic content.
Result: Google indexes nothing meaningful. This is exactly what Guard flags as a "script shell only" failure.
03. Partial render
<title> present. Body content missing. API failed during render.
Result: Page ranks for nothing — Google indexed a shell with a title.
04. Deep link failure
/features works in browser. Direct request returns 404 or empty shell.
Result: Page is never indexed. It only exists via client-side navigation.
05. Prerender drift
Snapshot generated at build time. Content updated after deploy. Sitemap still points to old content.
Result: Wrong content indexed. Rankings unstable. Users land on outdated pages.
06. Proxy / rendering loops
Edge → origin → redirect → edge → origin. Infinite loop. This is a real failure mode we see when proxy configurations don't include loop detection.
Result: HTTP 508 or timeout. Page never renders for anyone.
Proper systems block this with loop detection headers. If yours doesn't — you'll find out in production.
07. Silent content drops
No errors. No alerts. Just bad HTML. Before deploy: 110 KB HTML, 2,500 words. After deploy: 9 KB HTML, 150 words.
Result: Rankings disappear gradually. No error codes — search engines just stop showing your pages.
Guard monitors exactly this: DOM drop >50%, text drop >40%, missing title or H1.
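A stripped-down version of that kind of check fits in a few lines of shell. This is a sketch, assuming you save a known-good baseline snapshot at each deploy; the URL, paths, and thresholds are placeholders:

```bash
#!/usr/bin/env bash
# Compare the current bot-facing HTML against a saved baseline and
# alert on the signals above. URL, paths, and thresholds are placeholders.
URL="https://yourdomain.com/"
BASELINE="baseline.html"

curl -s -A "Googlebot" "$URL" -o current.html

old=$(wc -c < "$BASELINE")
new=$(wc -c < current.html)

# Flag an HTML size drop of more than 50%.
if [ "$new" -lt $(( old / 2 )) ]; then
  echo "ALERT: HTML dropped from ${old} to ${new} bytes"
fi

# Flag missing <title> or <h1>. (A text-drop check works the same way
# after stripping tags, as in the earlier snippet.)
grep -qi '<title' current.html || echo "ALERT: missing <title>"
grep -qi '<h1'    current.html || echo "ALERT: missing <h1>"
```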
Prerender vs SSR vs Edge Rendering
Three approaches, three very different failure profiles. Here's what actually happens with each one in production.
Prerender (Build-Time)
What actually happens:
HTML generated once during build. Static snapshot served to bots.
Where it breaks: routes missing from the prerender list fall back to the JS shell, and content updated after the build drifts from the snapshot.
Real failure:
500 routes exist. 50 prerendered. 450 return JS shell. Only 10% of your site is indexable.
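You can audit your own coverage by walking the sitemap and flagging routes that return a shell. A sketch, assuming a flat sitemap.xml; the domain and the 5 KB threshold are illustrative:

```bash
#!/usr/bin/env bash
# Flag sitemap routes that return suspiciously small HTML to Googlebot.
# Domain and threshold are illustrative.
SITEMAP="https://yourdomain.com/sitemap.xml"

curl -s "$SITEMAP" \
  | grep -o '<loc>[^<]*</loc>' \
  | sed -e 's/<loc>//' -e 's/<\/loc>//' \
  | while read -r url; do
      size=$(curl -s -A "Googlebot" "$url" | wc -c)
      [ "$size" -lt 5000 ] && echo "SHELL?  $url  (${size} bytes)"
    done
```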
SSR (Server-Side Rendering)
What actually happens:
Server builds HTML per request. Bots get full content — if it's working correctly.
Where it breaks: slow or failing APIs, render timeouts, and routes that silently fall back to client-side rendering.
Real failure:
SSR works on the homepage. /blog/* falls back to CSR. Half your site indexes, half disappears.
Edge Rendering (DataJelly Model)
What actually happens:
Edge proxy intercepts bot requests. Returns fully rendered HTML snapshot. AI bots get clean Markdown instead of HTML.
Key difference:
It does not depend on your app rendering correctly.
Real behavior:
No partial rendering. No fallback gaps.
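If you're evaluating a setup like this, you can verify the behavior directly by requesting the same URL with different bot user agents; GPTBot is just one example UA here:

```bash
# What does a search bot get? Should be full HTML, not a root div.
curl -s -A "Googlebot" https://yourdomain.com/ | head -c 300

# What does an AI crawler get? In this model, clean Markdown.
curl -s -A "GPTBot" https://yourdomain.com/ | head -c 300
```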
Practical Comparison
Side-by-side — how each approach performs on the things that actually matter for SEO.
| | Prerender | SSR | Edge |
|---|---|---|---|
| HTML consistency | Depends on build coverage | Depends on routing + infra | Consistent per request |
| Route coverage | Limited to known routes | Often incomplete in real apps | All routes, including deep links |
| Content freshness | Stale until rebuild | Fresh but fragile | Fresh via snapshot pipeline |
| Failure modes | Missing pages | Inconsistent rendering | Predictable output |
| AI crawler support | No | No | Yes — Markdown output |
The Verdict
Prerender: Reliable for static pages. Unsafe for dynamic apps. If your content changes more than once a week, snapshots will drift.
SSR: Works when everything is fast. Fails unpredictably under real load. If your answer to "what happens when the API is slow?" is "it depends" — it will break.
Edge rendering: Most stable in production when properly implemented. Failures are handled at the response layer, not inside your app.
Quick Test: What Do Bots Actually See?
Most people guess. Don't.
Run this test and look at the actual response your site returns to bots.
Fetch your page as Googlebot
Use your terminal:
curl -A "Googlebot" https://yourdomain.comLook for:
- Real visible text (not just
<div id="root">) - Meaningful content in the HTML
- Page size (should not be tiny)
Compare bot vs browser
Now test what a real browser gets:
curl -A "Mozilla/5.0" https://yourdomain.comIf these responses are different, Google is indexing a different page than your users see.
Stop guessing — measure it.
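Here's a minimal side-by-side sketch; the domain is a placeholder, and the word counts are approximate because tags are stripped with a regex:

```bash
#!/usr/bin/env bash
# Fetch the same URL as Googlebot and as a generic browser UA,
# then compare bytes and rough word counts. Domain is a placeholder.
URL="https://yourdomain.com/"

curl -s -A "Googlebot"   "$URL" -o bot.html
curl -s -A "Mozilla/5.0" "$URL" -o browser.html

for f in bot.html browser.html; do
  words=$(perl -0777 -pe 's/<(script|style)\b.*?<\/\1>//gis; s/<[^>]+>//g' "$f" | wc -w)
  echo "$f: $(wc -c < "$f") bytes, ${words} words"
done
```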
Real example: 253 words vs 13,547
We see this constantly. Here's a real example from production: Googlebot saw 253 words and 2 KB of HTML. A browser saw 13,547 words and 77.5 KB. Same URL — completely different content.

If your HTML doesn't contain the content, Google doesn't either.
Compare Googlebot vs browser on your site → HTTP Debug Tool

Check for common failure signals
We see this all the time in production:
- HTML under ~1 KB → usually empty shell
- Visible text under ~200 characters → thin or missing content
- Missing <title> or <h1> → weak or broken page
- Large difference between bot vs browser HTML → rendering issue
Use the DataJelly Visibility Test (Recommended)
You can run this without touching curl. It shows you:
- Raw HTML returned to bots (Googlebot, Bing, GPTBot, etc.)
- Fully rendered browser version
- Side-by-side differences in word count, HTML size, links, and content
What this test tells you (no guessing)
After running this, you'll know:
- Whether your HTML is actually indexable
- Whether bots are seeing partial content
- Whether rendering is breaking in production
This is the difference between "I think SEO is set up" and "I know what Google is indexing."
If you don't understand why this happens, read: Why Google Can't See Your SPA
If this test fails
You have three real options:
- SSR: works if you can keep it stable in production
- Prerendering: breaks with dynamic content and scale
- Edge Rendering: reflects real production output without app changes
If you do nothing, you will not rank consistently. Learn how Edge Rendering works →
This issue doesn't show up in Lighthouse. It shows up in rankings.
Practical Checklist
You don't need tools. Just test the response. These six checks catch 95% of rendering failures.
1. Check HTML Size
curl -A "Googlebot" https://yoursite.com/page
2. Check Real Content
Search the response for text you expect on the page: your H1, key copy, product or feature names.
If you only see scripts → it's not indexable.
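For example, grep the bot response for strings you know belong on the page; the phrases below are placeholders:

```bash
# Does the bot-facing HTML contain content you expect to rank for?
curl -s -A "Googlebot" https://yoursite.com/page \
  | grep -iE '<h1|your product name|a phrase from your page copy' \
  || echo "No expected content found: likely a script shell"
```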
3. Check Deep Routes
Test your key pages directly: /features, /pricing, individual blog posts, not just the homepage.
If any return 404 or empty HTML → that route won't rank.
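A quick loop makes this repeatable; the routes below are examples:

```bash
# Check key deep routes directly: status code and response size.
for path in /features /pricing /blog/some-post; do
  url="https://yoursite.com${path}"
  code=$(curl -s -A "Googlebot" -o /tmp/page.html -w '%{http_code}' "$url")
  echo "$url -> HTTP $code, $(wc -c < /tmp/page.html) bytes"
done
```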
4. Check Stability Over Time
Things break after deploy: snapshots go stale, routes drop out of the build, HTML shrinks silently.
Guard exists specifically to catch these failures continuously.
5. Measure Content Density
Count visible text in the raw HTML response.
Large HTML size with low text content is a script shell — lots of JavaScript, no real content.
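One rough way to quantify density is visible-text bytes divided by total HTML bytes. A sketch; the URL is a placeholder and the extraction is approximate:

```bash
# Text-to-HTML ratio: a very low ratio usually means a script shell.
URL="https://yoursite.com/page"
HTML=$(curl -s -A "Googlebot" "$URL")
total=$(printf '%s' "$HTML" | wc -c)
text=$(printf '%s' "$HTML" \
  | perl -0777 -pe 's/<(script|style)\b.*?<\/\1>//gis; s/<[^>]+>//g' \
  | wc -c)
awk -v t="$text" -v h="$total" \
  'BEGIN { if (h > 0) printf "Density: %.0f%% visible text\n", 100 * t / h }'
```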
6. Simulate Failure
Break things on purpose and observe what bots get: take an API your pages depend on offline, slow it down, deploy with a route missing.
If the HTML degrades → your rendering system is fragile. Edge rendering doesn't degrade because it serves pre-built snapshots.
Where DataJelly Fits
DataJelly fixes the actual failure point: what bots receive.
Edge proxy serves full HTML snapshots to search bots
Every request returns complete, rendered content — not a JavaScript shell.
AI crawlers receive structured Markdown
GPTBot, ClaudeBot, PerplexityBot get clean, parseable content.
Works with React, Vite, Lovable — without changes
No framework migration. No code changes. No build pipeline modifications.
It removes dependency on client-side rendering, framework correctness, and build-time coverage.
Result: Every bot request returns real content, not a JavaScript shell.
Stop guessing. See what bots actually see.
Run a free visibility test on your site, or start a 7-day free trial to fix rendering across all your pages.
The Bottom Line
If your SEO depends on JavaScript executing successfully, it will fail. Not sometimes — eventually.
The only thing that matters: what HTML is returned on the first request.
If that HTML is thin, empty, or inconsistent — your SEO is broken.