Guard
April 2026

Why Your React App Shows a Blank Page in Production

A production React app can return 200 OK and still serve an empty root div. Here is how to prove, detect, and fix blank-page failures.

react
production-debugging
seo
rendering
monitoring
caching

Real failure

The page returned 200. Deploy checks passed. Error rate stayed flat. But the HTML body was basically empty: 1.34 KB on a route that normally shipped 64-90 KB, visible text fell to 17 characters, and users got a white screen where the product grid should have been.

Capture: Raw HTML served to Googlebot on the broken release

  • URL: /store
  • HTML size: 1.34 KB
  • Visible text: 17 chars
  • Word count: 3

curl -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" -s https://example.com/store | wc -c
1372

<body>
  <div id="root"></div>
  <script src="/assets/main.8f3c1.js"></script>
</body>
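The captured body above can be turned into a visible-text count with a small tag-stripping pipeline. This is a rough heuristic using sed, not a real HTML parser, and the heredoc simply replays the shell markup shown above:

```shell
#!/bin/sh
# Strip tags from the captured shell and count non-whitespace characters.
# sed tag-stripping is an approximation, not a full HTML parser.
visible=$(
  sed 's/<[^>]*>//g' <<'HTML' | tr -d '[:space:]' | wc -c | tr -d ' '
<body>
  <div id="root"></div>
  <script src="/assets/main.8f3c1.js"></script>
</body>
HTML
)
echo "$visible"   # 0: the shell carries no visible content at all
```

Run against the healthy release, the same pipeline returns over a thousand characters, which is what makes the drop so easy to alert on.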

Capture: Same URL before deploy rollback

  • URL: /store
  • HTML size: 68.7 KB
  • Visible text: 1,486 chars
  • Word count: 247

The previous release returned server-rendered category markup, 26 product names, price text, shipping copy, FAQ snippets, and JSON-LD. The browser and raw HTML matched closely enough that Google could extract content without waiting on JavaScript.

Capture: Monitoring looked fine while the page was blank

  • URL: /store
  • TTFB: 184 ms
  • FCP: 0.6 s
  • Visible text: 17 chars

Synthetic monitoring reported a fast first paint. The problem was that the first paint was only the page background and header shell. In RUM, LCP disappeared on some sessions because no meaningful element rendered.

What Google saw

Request: Raw fetch test

  • Command: curl -A "Googlebot" https://example.com/store
  • Status: 200
  • HTML: 1.34 KB
  • Visible text: 17 chars

Result:
- body contains an empty root node
- product grid is missing
- CTA copy is missing
- structured data is missing

Conclusion:
Google fetched a successful response, but the response did not contain the content the page was supposed to rank for.

What changed

Diff: HTML payload size before and after deploy

Before

  • HTML: 68.7 KB
  • Content: SSR product markup, meta tags, and structured data

After

  • HTML: 1.34 KB
  • Content: empty root and scripts only

Change

  • HTML down about 98%
  • browser could still load JavaScript later
  • crawlers and impatient users got almost nothing

Diff: Visible text and word count

Before

  • Visible text: 1,486 chars
  • Word count: about 247

After

  • Visible text: 17 chars
  • Word count: 3

Change

  • Google had the URL, but not the useful content
  • query coverage collapsed because the page stopped containing the terms it used to rank for

Diff: Organic impact over the next 72 hours

Before

  • Impressions: 6,420/day
  • Organic sessions: 1,180/day

After

  • Impressions: 3,940/day
  • Organic sessions: 711/day

Change

  • impressions down about 38.6%
  • sessions down about 39.7%
  • revenue from the affected template down about 19% by day 4

What Google did

Search Console

Search Console outcome

Status

Crawled - not indexed / indexed but not ranking

Affected pages

affected React store template

Trend

impressions down 22-41% within 48-96 hours

Interpretation: Googlebot fetched a 200 page with weak HTML, almost no body text, and missing structured data. Some URLs moved toward "Crawled - not indexed" after recrawl. Others stayed indexed but stopped ranking for useful queries because Google had the URL without the content.

Why it happened

The useful insight was not "React is slow." React was not the root problem. The page was fast enough to load, but not complete enough to rank or convert.

Critical content never made it into the initial HTML, or hydration wiped what little was there. Status codes, TTFB, and CDN hits only proved that a route was reachable. They did not prove that the route returned meaningful content.

Signals that prove the page changed

  • HTTP status: 200 OK (can still be broken)
  • HTML size: 68.7 KB -> 1.34 KB (content dropped)
  • Visible text: 1,486 -> 17 chars (bots saw almost nothing)
  • Organic sessions: 1,180/day -> 711/day (business impact followed)

Detection signals

  • HTML size under about 2 KB on pages that usually return 40-100 KB
  • Visible text under 50 characters on a route that should have hundreds or thousands
  • Raw HTML missing product grid, article body, price text, or CTA markup
  • Console logs show Hydration failed or Text content did not match server-rendered HTML on first paint
  • Browser renders a header shell or background only, with no meaningful LCP element
  • Googlebot HTML differs from the full browser DOM after scripts run
  • Edge response has x-cache: HIT while origin body and ETag are different
  • Search Console impressions drop within 2-5 days after deploy even though status codes remain 200

Signal                 Healthy    Suspect       Broken
html_size              > 50 KB    15-50 KB      < 15 KB
rendered_text_length   > 1,000    200-1,000     < 200
resource_error_count   0          1-2           3+
console_error_count    0          1 warning     new uncaught error
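Those health bands can be encoded in a small snapshot check. The classify helper and the byte thresholds below are illustrative (15 KB and 50 KB expressed in bytes), not a Guard API:

```shell
#!/bin/sh
# Classify a page snapshot against the health bands above.
# classify() and its thresholds are illustrative, not a Guard API.
classify() {
  size=$(printf '%s\n' "$1" | wc -c | tr -d ' ')
  text=$(printf '%s\n' "$1" | sed 's/<[^>]*>//g' | tr -d '[:space:]' | wc -c | tr -d ' ')
  if [ "$size" -lt 15360 ] || [ "$text" -lt 200 ]; then
    echo broken
  elif [ "$size" -le 51200 ] || [ "$text" -le 1000 ]; then
    echo suspect
  else
    echo healthy
  fi
}
verdict=$(classify '<body><div id="root"></div></body>')
echo "$verdict"   # broken: tiny payload, zero visible text
```

The point of combining size and text is that either signal alone can false-negative: a page stuffed with inline scripts can be large while still rendering nothing.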

Real failure patterns

1. Performance-shaped failure: bundle arrives too late

Cause:

  • a chunk split changed the main route payload from 412 KB gzipped to about 1.7 MB transferred with vendor code included

Measurable impact:

  • 95th percentile JavaScript download jumped from about 1.1 seconds to multiple seconds
  • users saw a shell-only layout for 8-14 seconds
  • hydration errors and timeouts appeared in console logs

Outcome:

  • users and crawlers hit a page that technically looked alive but carried no useful content in the window that mattered

2. Content failure: SSR guard returned nothing for some requests

Cause:

  • a server-side auth check treated a harmless cookie as a block condition and skipped the render path

Measurable impact:

  • about 13.8% of requests returned around 1.6 KB of HTML instead of 70+ KB
  • visible text dropped under 30 characters on affected routes
  • origin and edge bodies differed during deploy

Outcome:

  • the server returned 200 and emitted the app shell, which made uptime checks pass while the page was blank

3. Cache failure: CDN kept serving a stale shell

Cause:

  • an edge cache key ignored the deployment version header during a partial release

Measurable impact:

  • edge returned x-cache: HIT while origin size and ETag changed
  • cache hit ratio on the bad object reached 81.3%
  • the stale shell replayed for 46 hours after origin recovered

Outcome:

  • support tickets rose and Search Console clicks dropped while engineering saw a healthy origin response

Tests

1. Fetch the page as Googlebot and inspect the body

What to look for: raw HTML contains meaningful content such as product list, article body, prices, or CTA copy.

Failure signal: response body is about 1-2 KB and contains only shell markup such as <div id="root"></div>.

2. Compare edge and origin output

What to look for: Content-Length, ETag, cache headers, and the first 30 lines should match closely.

Failure signal: origin returns full SSR HTML while CDN returns a tiny cached shell.
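One way to automate this comparison is to pull curl's `%{size_download}` write-out variable from both hosts and alert on drift. The sizes below are the sample values from this incident, and the hostnames in the comment are placeholders:

```shell
#!/bin/sh
# Flag edge/origin drift when body sizes diverge by more than 10%.
# In production, fill these from live fetches, e.g.:
#   curl -s -o /dev/null -w '%{size_download}' https://origin.example.com/store
origin_size=70349   # sample: ~68.7 KB of full SSR HTML from origin
edge_size=1372      # sample: cached shell from the CDN edge
drift=$((origin_size - edge_size))
[ "$drift" -lt 0 ] && drift=$((0 - drift))
if [ $((drift * 10)) -gt "$origin_size" ]; then
  verdict="stale-or-broken-edge"
else
  verdict="consistent"
fi
echo "$verdict"
```

Size alone is a blunt instrument; pairing it with an ETag comparison (see the fix section below) removes most false positives from legitimate content changes.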

3. Render with console logging

What to look for: no hydration errors and meaningful DOM content within 5 seconds under throttled conditions.

Failure signal: hydration errors in the console and an empty root after 5 seconds.

4. Strip tags and count visible characters

What to look for: visible characters align with baseline. Content pages usually have hundreds or thousands of visible characters.

Failure signal: 20-40 visible characters and fewer than 10 words.

5. Diff current release HTML against the previous release

What to look for: server-rendered sections remain present.

Failure signal: previous release returns about 74 KB and many product cards while the current release returns about 1.4 KB and zero cards.

6. Throttle to Slow 4G and observe content arrival

What to look for: meaningful content appears within 8 seconds for acceptable UX.

Failure signal: only the shell paints, and content appears after 12-20 seconds or never.

Detection checklist

Page checks that matter

  • Measure HTML payload size per critical URL and alert when it drops more than 75% from the 7-day baseline.
  • Extract visible text from raw HTML and fail checks if content pages fall below 120 characters.
  • Verify critical selectors exist in initial HTML: #product-list, .article-body, .price, and the primary CTA.
  • Compare CDN edge and origin response body length, ETag, and cache headers on every deploy.
  • Fetch pages as Googlebot and as a normal browser user agent, then diff the raw HTML.
  • Track 95th percentile route JavaScript transfer time and alert if it jumps above 4.5 seconds.
  • Collect hydration and render errors from RUM with route-level grouping.
  • Watch Search Console for states like "Crawled - not indexed" and sudden click or impression drops.
  • Store a daily HTML snapshot for top templates so you can diff real output, not just source code.
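The first checklist item reduces to a single integer comparison once the baseline is stored somewhere. The numbers below are this incident's values; the 75% band means "alert when today is below a quarter of baseline":

```shell
#!/bin/sh
# Alert when today's HTML size drops more than 75% below the 7-day baseline.
baseline=68700   # 7-day median for /store in bytes (sample value)
today=1372       # size of today's fetch (sample value)
if [ $((today * 4)) -lt "$baseline" ]; then
  alert="content-loss"
else
  alert="ok"
fi
echo "$alert"   # content-loss: 1.34 KB is far below 25% of baseline
```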

Fix

Fail closed with server-rendered fallback content

If SSR cannot produce the route, do not return a silent empty shell with 200. Return a real error page or a minimal server-rendered fallback with explanatory text, canonical handling, and noindex if needed.

Use this when: SSR may fail for a subset of requests due to auth, third-party failures, or feature flags.

Add content-aware deployment gates

Block release if HTML size, visible text, or required selectors fall outside baseline bands. Example: category pages must return at least 20 KB HTML, 400+ visible characters, and a product container in raw HTML.

Use this when: automating CI/CD checks to prevent content-loss releases.
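A minimal CI gate along these lines could look like the sketch below. The inline HTML stands in for the candidate build's rendered category page, and the `id="product-list"` selector plus the 20 KB / 400-character bands are the example thresholds above, not fixed rules:

```shell
#!/bin/sh
# Deployment gate sketch: block the release if the category page HTML
# falls outside baseline bands. The inline HTML stands in for the
# candidate build's rendered output.
html='<body><div id="root"></div></body>'
size=$(printf '%s\n' "$html" | wc -c | tr -d ' ')
text=$(printf '%s\n' "$html" | sed 's/<[^>]*>//g' | tr -d '[:space:]' | wc -c | tr -d ' ')
gate=pass
[ "$size" -ge 20480 ] || gate=fail   # at least 20 KB of HTML
[ "$text" -ge 400 ]   || gate=fail   # at least 400 visible characters
case "$html" in
  *'id="product-list"'*) ;;          # product container must be in raw HTML
  *) gate=fail ;;
esac
echo "$gate"   # fail: this build ships an empty shell
```

A gate like this would have caught the incident above at deploy time, because the 1.34 KB shell fails all three checks.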

Make edge and origin output provably consistent

Invalidate CDN on deploy, include deploy version in the cache key when needed, and run an automated edge-vs-origin diff.

Use this when: you rely on edge caching and need to avoid stale shell responses.
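The automated diff can start with something as simple as comparing ETags captured from both hosts after a deploy. The header values here are samples, and the curl line in the comment shows one way to capture them:

```shell
#!/bin/sh
# Compare ETags captured from edge and origin after a deploy.
# In production, pull them with e.g.:
#   curl -sI https://example.com/store | grep -i '^etag:'
edge_etag='"8f3c1-old"'     # sample: stale edge object
origin_etag='"b2d47-new"'   # sample: fresh origin response
if [ "$edge_etag" = "$origin_etag" ]; then
  verdict="consistent"
else
  verdict="purge-needed"
fi
echo "$verdict"
```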

Reduce shell dependency for primary content

Put critical copy, product names, prices, headings, and structured data in the initial HTML. Progressive hydration is fine. Critical content should not depend on full JavaScript execution.

Use this when: you want content to rank and convert without waiting for full client-side rendering.

Capture Google-seen HTML in production

Store a small sample of raw HTML fetched with Googlebot user agent after each deploy.

Use this when: you need an auditable "what Google saw" record to triage SEO incidents.
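A snapshot step can be as small as one date-stamped fetch per template per deploy. The directory layout is illustrative, and the printf line stands in for the real fetch shown in the comment:

```shell
#!/bin/sh
# Store a date-stamped Googlebot-UA snapshot per deploy. The printf line
# stands in for the real fetch, which would be:
#   curl -s -A "Googlebot" https://example.com/store > "$file"
mkdir -p snapshots
file="snapshots/store-$(date +%Y%m%d-%H%M%S).html"
printf '<div id="root"></div>' > "$file"
size=$(wc -c < "$file" | tr -d ' ')
echo "$size"
```

Keeping these snapshots for even a week gives you the "before" half of every diff in this post for free.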

Tie SEO and conversion alerts to content loss

Set guardrails that connect content disappearance with business impact. If visible text drops 90% and sessions or conversions fall 15-30% within a day, escalate as a production incident.

Use this when: you want detection to map directly to business impact instead of ops noise.
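That guardrail reduces to two percentage drops checked together. The before/after numbers are this incident's values; the 90% and 15% thresholds are the example bands named above:

```shell
#!/bin/sh
# Escalate when visible text collapses and organic sessions fall together.
text_before=1486;     text_after=17
sessions_before=1180; sessions_after=711
text_drop=$(( (text_before - text_after) * 100 / text_before ))
sess_drop=$(( (sessions_before - sessions_after) * 100 / sessions_before ))
if [ "$text_drop" -ge 90 ] && [ "$sess_drop" -ge 15 ]; then
  action="escalate"
else
  action="observe"
fi
echo "$action"   # escalate: ~98% text drop with ~39% session drop
```

Requiring both signals keeps a planned content rewrite (text drops, sessions hold) from paging anyone.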

How DataJelly Guard fits

DataJelly Guard is built for this exact failure class: production pages that return 200 but lose the content search engines and users need.

It compares live rendered output, raw HTML, visible text, critical selectors, and deploy-time drift so blank React pages show up as content regressions instead of vague performance noise.

Investigate content regressions now

Run a quick Googlebot fetch, compare edge vs origin, and validate visible text to stop indexing damage.

Run Guard checks

Final takeaway

The page was not down. That was the trap. It returned 200, loaded quickly enough to satisfy shallow monitoring, and still failed because the content disappeared. Once you inspect the raw HTML, the incident stops looking like performance and starts looking like missing output.

Quick Check: Could Guard Catch This?

For production failures, the important question is not whether the server responded. It is whether the page still works after render.

  • Does the page still contain the expected visible content?
  • Did the title, canonical, robots directives, or Open Graph tags change?
  • Did any critical script, image, or stylesheet fail?
  • Does the page visually differ from the previous known-good version?

Want to catch this before users do?

DataJelly Guard monitors production pages for silent frontend failures, broken rendering, missing SEO signals, and visual regressions.
