April 2026

Why "Google Renders JavaScript" Is Misleading

A team ships a React marketing site. Lighthouse is green. Two weeks later, organic traffic drops 60%. Search Console says "Discovered – currently not indexed." URL Inspection returns 200. The actual HTML is 6 KB with ~40 words and 18 script tags. Google indexed exactly what it received: an empty page. We see this constantly.


The Real Problem

"Google renders JavaScript" gets repeated as if rendering is part of the primary crawl. It isn't. Rendering is a separate, deferred, deprioritized step. Treating it as guaranteed is how teams ship sites that look healthy and rank for nothing.

A real failing page we audited:

  • HTML size: 6 KB
  • Visible text: ~40 words
  • DOM: single root <div id="root"> + 18 script tags
  • Headings, links, structured data: none
  • Search Console: "Discovered – currently not indexed"

Lighthouse 98. Browser perfect. Index empty.

How Google Actually Renders (Two Stages)

Google does not render your site the way a browser does. It runs a two-step pipeline:

  1. Crawl raw HTML — evaluated immediately. This is what gets indexed first.
  2. Render later — optional, delayed, and unreliable. Can take minutes to days. Can be skipped entirely.

Stage 1 — HTML crawl (this is what matters)

Googlebot fetches the page and evaluates the raw response immediately. A typical failing SPA looks like this:

<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Acme — modern SaaS</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/assets/index-a3f7.js"></script>
<script src="https://cdn.example.com/analytics.js"></script>
<!-- ...15 more script tags... -->
</body>
</html>

That document is 5 KB, has ~30 visible words, no headings, no links, no structured data. Google considers it thin and often never indexes it. This is the same shape we cover in Script Shell Pages and Your HTML Is Only 4KB.
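
You can put numbers on "thin" straight from the raw response. A minimal sketch using standard Unix tools, with the shell document above inlined so it runs locally (the tag-stripping sed pass is crude, but it is enough to expose an empty shell):

```shell
# The "script shell" HTML from above, inlined so the check runs offline.
html='<!doctype html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<title>Acme — modern SaaS</title>
</head>
<body>
<div id="root"></div>
<script type="module" src="/assets/index-a3f7.js"></script>
</body>
</html>'

# Byte size of the document.
bytes=$(printf '%s' "$html" | wc -c)

# Rough visible-word count: strip tags, then count words.
words=$(printf '%s' "$html" | sed 's/<[^>]*>//g' | wc -w)

echo "bytes=$bytes words=$words"
```

Run the same two pipelines against a `curl -A "Googlebot"` fetch of your own pages; single-digit word counts are the signature of this failure mode.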

Stage 2 — JavaScript rendering (best effort only)

Google may queue your page for rendering. When it does:

  • Latency ranges from minutes to days
  • It can be skipped entirely under crawler load
  • The render environment is constrained (timeouts, blocked third-parties)
  • If render fails, Google falls back to the original HTML

There is no retry. No alert. No guarantee. The only signal is your traffic going down weeks later.

What Most Guides Get Wrong

The standard line is: "Google can render JavaScript, so SPAs are fine." That sentence ignores how the system actually behaves in production.

"Google renders JS, so don't worry about your HTML."

Rendering is a separate, deferred queue. Your HTML is what gets indexed first — and often only.

"Just submit a sitemap."

A sitemap tells Google which URLs exist. It says nothing about whether the HTML at those URLs is renderable or complete.

"Use prerendering everywhere."

Prerendering trades 'empty' for 'stale.' Without strict invalidation, you ship outdated pricing and missing routes for weeks.

You're relying on a secondary system that is slower, less reliable, and often skipped. That's not a strategy. That's a gamble. And it ignores AI crawlers entirely — see How AI Crawlers Read Your Website for why GPTBot and friends don't run JS at all.

What We See in Production

Five repeatable failure patterns. We see all of them, every week, across React, Vite, and Lovable apps.

1. Empty shell pages

Shape: 5 KB HTML, 30–80 visible words, 20+ script tags.

Outcome: Never indexed. The most common failure mode for React/Vite/Lovable SPAs.

2. Partial render

Shape: Header and footer render. Main content (pricing, product details, FAQs) is missing because it loads from an async API.

Outcome: Page indexes, but ranks for nothing meaningful. You "exist" in Google but get no traffic — see Indexed But No Traffic.

3. JS execution failure

Shape: Main bundle 404s on a stale CDN URL, a third-party script blocks main(), or hydration throws on first paint.

Outcome: HTML stays empty forever. We dig into this in Critical JavaScript Failures.

4. Rendering never happens

Shape: Site has 5,000+ pages. Only the top ~200 get queued for render. The long tail stays as raw HTML.

Outcome: Crawl budget interacts with render cost. Low-priority routes are invisible.

5. Stale or inconsistent render

Shape: Google rendered an old JS bundle weeks ago. Cached API responses don't match what's live.

Outcome: Indexed content doesn't match reality. Pricing in SERPs is wrong for weeks.

Prerender vs SSR vs Edge

Three real ways out. Each has tradeoffs.

Approach               | Works when                                                         | Breaks when
Prerendering           | Static pages, known routes, infrequent updates                     | Content changes often, invalidation misses, route count grows
SSR                    | You can absorb latency cost and run rendering infra                | Higher TTFB, infra complexity, hot path scales with traffic
Edge proxy (DataJelly) | You want bots to see fully-rendered HTML without changing your SPA | No long-lived snapshot cache → no drift, AI crawlers get clean Markdown

Deeper breakdown: Prerender vs SSR vs Edge Rendering and The Hidden Costs of Prerendering.

Why edge removes the gamble

  • Fully-rendered HTML delivered at request time — no render queue dependency
  • AI crawlers receive clean structured Markdown
  • Humans get the live SPA, untouched
  • Works with React, Vite, and Lovable apps without rewrites

How to Detect It (Quick Test)

Stop guessing what Google sees. Fetch your page as Googlebot and look at the raw response. If it's a 6 KB shell, that's exactly what got indexed.

Quick Test: What Do Bots Actually See? (~30 seconds)

Most people guess. Don't.

Run this test and look at the actual response your site returns to bots.

1. Fetch your page as Googlebot

Use your terminal:

curl -A "Googlebot" https://yourdomain.com

Look for:

  • Real visible text (not just <div id="root">)
  • Meaningful content in the HTML
  • Page size (should not be tiny)
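
To put a number on "real visible text", strip the tags and count words. This is a rough sketch, not an HTML parser: the sed pass is line-based and misses multi-line script bodies, but it reliably flags an empty shell. The helper name is ours:

```shell
# Approximate visible word count of an HTML response read from stdin.
# Crude by design: drop single-line <script>/<style> elements,
# strip all remaining tags, then count words.
visible_words() {
  sed -e 's/<script[^>]*>.*<\/script>//g' \
      -e 's/<style[^>]*>.*<\/style>//g' \
      -e 's/<[^>]*>//g' \
  | wc -w
}

# Against a live page (placeholder domain):
#   curl -sA "Googlebot" https://yourdomain.com | visible_words
```
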

2. Compare bot vs browser

Now test what a real browser gets:

curl -A "Mozilla/5.0" https://yourdomain.com

If these responses are different, Google is indexing a different page than your users see.
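
To avoid eyeballing two terminal dumps, save each response to a file and compare the files. The stand-in contents below are illustrative so the sketch runs locally; in practice you would populate the files with the two curl fetches shown above.

```shell
# In practice, populate the files from the two fetches:
#   curl -sA "Googlebot"   https://yourdomain.com > bot.html
#   curl -sA "Mozilla/5.0" https://yourdomain.com > browser.html
# Stand-in contents so this sketch runs offline:
printf '<div id="root"></div>' > bot.html
printf '<h1>Pricing</h1><p>Full rendered content...</p>' > browser.html

# cmp -s exits 0 only when the files are byte-identical.
if cmp -s bot.html browser.html; then
  echo "identical responses"
else
  echo "bot and browser see different HTML"
fi
```
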

Stop guessing — measure it.

Real example: 253 words vs 13,547

We see this constantly. Here's a real example from production: Googlebot saw 253 words and 2 KB of HTML. A browser saw 13,547 words and 77.5 KB. Same URL — completely different content.

[Image: Bot vs browser comparison showing 253 words for Googlebot vs 13,547 words for a rendered browser on the same URL]

If your HTML doesn't contain the content, Google doesn't either.

Compare Googlebot vs browser on your site → HTTP Debug Tool

3. Check for common failure signals

We see this all the time in production:

  • HTML under ~1KB → usually empty shell
  • Visible text under ~200 characters → thin or missing content
  • Missing <title> or <h1> → weak or broken page
  • Large difference between bot vs browser HTML → rendering issue
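
These signals are easy to script. A rough sketch (`check_html` is a hypothetical helper of ours; the 1 KB and 200-character cutoffs mirror the heuristics above):

```shell
# Rough failure-signal check on a saved HTML response.
# Save one first: curl -sA "Googlebot" https://yourdomain.com > response.html
check_html() {
  html=$(cat "$1")
  # Total bytes in the response.
  bytes=$(printf '%s' "$html" | wc -c)
  # Visible characters: strip tags, then drop all whitespace.
  text=$(printf '%s' "$html" | sed 's/<[^>]*>//g' | tr -d '[:space:]')
  [ "$bytes" -lt 1024 ]  && echo "WARN: HTML under ~1 KB (likely empty shell)"
  [ "${#text}" -lt 200 ] && echo "WARN: under ~200 chars of visible text"
  printf '%s' "$html" | grep -qi '<title' || echo "WARN: missing <title>"
  printf '%s' "$html" | grep -qi '<h1'    || echo "WARN: missing <h1>"
  return 0
}

# Usage: check_html response.html
```
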

Use the DataJelly Visibility Test (Recommended)

You can run this without touching curl. It shows you:

  • Raw HTML returned to bots (Googlebot, Bing, GPTBot, etc.)
  • Fully rendered browser version
  • Side-by-side differences in word count, HTML size, links, and content
Run Visibility Test — Free

What this test tells you (no guessing)

After running this, you'll know:

  • Whether your HTML is actually indexable
  • Whether bots are seeing partial content
  • Whether rendering is breaking in production

This is the difference between "I think SEO is set up" and "I know what Google is indexing."

If you don't understand why this happens, read: Why Google Can't See Your SPA

If this test fails

You have three real options:

  • SSR: works if you can keep it stable in production
  • Prerendering: breaks with dynamic content and scale
  • Edge Rendering: reflects real production output without app changes

If you do nothing, you will not rank consistently. Learn how Edge Rendering works →

This issue doesn't show up in Lighthouse. It shows up in rankings.


Practical Checklist

1. Check raw HTML (this is the truth)

curl -A "Googlebot" https://your-site.com/page | wc -c
curl -A "Googlebot" https://your-site.com/page | grep -oE '\b\w+\b' | wc -l

Look for: HTML < 10 KB → failure risk. Fewer than ~200 word tokens → failure (the count above includes markup and script content, so real visible text is even lower). Mostly script tags → failure.

2. Compare source vs DOM

View page source (raw HTML) and compare to the rendered DOM in DevTools. Anything that only exists in DevTools, Google likely doesn't see during the initial crawl.

3. Measure content density

Healthy:   15–100 KB HTML   |   500–2000+ words
Broken:    < 10 KB HTML    |   < 150 words

4. Check Search Console signals

  • "Discovered – currently not indexed" → empty HTML at crawl time
  • "Crawled – currently not indexed" → partial or failed render. See Crawled Not Indexed.

5. Check runtime failures

Console errors, failed JS/CSS requests, third-party timeouts — all directly correlate with render failures. See Critical JavaScript Failures.

Want this automated? The Page Validator and HTTP Bot Comparison tool run all of these for you.

Real Thresholds

From the SPAs we audit week to week. These are not opinions — they're the actual breakpoints between "indexed" and "invisible."

Metric                 | Healthy        | Risk     | Broken
HTML size              | 15–100 KB      | 10–15 KB | < 10 KB
Visible words          | 500–2000+      | 150–500  | < 150
DOM headings           | ≥ 3 (incl. H1) | 1–2      | 0
Internal links in HTML | ≥ 10           | 3–9      | 0–2
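
The two primary cutoffs (HTML size and visible words) can be encoded in a few lines. `classify` is a hypothetical helper using the thresholds above:

```shell
# Classify a page by HTML size (in KB) and visible word count.
classify() {
  kb=$1; words=$2
  if [ "$kb" -ge 15 ] && [ "$words" -ge 500 ]; then
    echo healthy
  elif [ "$kb" -lt 10 ] || [ "$words" -lt 150 ]; then
    echo broken
  else
    echo risk
  fi
}

classify 6 40     # the 6 KB / ~40-word page from the intro → broken
classify 45 1200  # a typical healthy response → healthy
```
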

"Google renders JavaScript" is technically true and practically misleading.

Google indexes what it sees first: your raw HTML. If your HTML is empty, your page is invisible — regardless of how well your app renders in the browser. Stop relying on deferred rendering. Serve complete HTML at crawl time, or accept unstable indexing.

The DataJelly Approach

DataJelly Edge fixes this at the infrastructure level. The edge proxy serves fully-rendered HTML snapshots to bots, AI crawlers receive clean Markdown, and humans get the live SPA. You control what bots see — instead of relying on Google's render queue.

  • Fully rendered HTML at crawl time — no render queue gamble
  • AI Markdown for GPTBot, ClaudeBot, PerplexityBot
  • Works with React, Vite, and Lovable SPAs — no rewrites
  • Eliminates empty HTML responses at crawl time

Deeper read: How Edge works →

Run the Visibility Test — Free · Start 7-Day Free Trial · Ask a Question

Related Diagnostic Tools

  • Visibility Test: compare bot vs browser HTML side-by-side
  • Page Validator: check bot-readiness and HTML completeness
  • HTTP Bot Comparison: compare Googlebot vs browser responses
  • Site Crawler: audit HTML quality across all routes

Related Reading

What AI Crawlers Actually Extract From Your Site

GPTBot, ClaudeBot, PerplexityBot don't render JS at all. Here's what they pull from your HTML.

The Hidden Costs of Prerendering

Stale snapshots, broken invalidation, snapshot drift. Why prerendering trades one problem for another.

Script Shell Pages

When your HTML is one div and 15 script tags, this is what bots see.

Your HTML Is Only 4KB

Why a 4KB shell is not SEO-compatible — and what a healthy response looks like.

Why Google Can't See Your SPA

The full picture of how SPAs fail in production crawling.

Prerender vs SSR vs Edge Rendering

What actually works for SEO with real production data.

Crawled But Not Indexed

What the Search Console label really means — and how to fix it.

Indexed But No Traffic

Indexed pages that rank for nothing — usually a partial-render problem.

