serp-monitoring · seo · google-local · us-proxies · 2026

US SERP personalization: how much does the exit ZIP actually matter?

Measured the same Google SERP from 50 US ZIPs across 20 DMAs. Local pack ranks move, retail and services tabs differ, organic rank shifts are smaller than expected. Here's what we saw in 2026.

· Lena Ortiz · 6 min read

Setup and caveats

This post measures how Google's US SERP responses change across US exits rotated through Proxaro's residential pool. The setup:

  • 50 US ZIP codes selected across our 20-state anchor set.
  • 20 DMAs including the top 10 plus 10 secondary markets.
  • Residential exits only, Comcast and Spectrum.
  • Query set of 40 Google queries: 10 local-services, 10 retail product, 10 branded, 10 navigational.
  • Measurement window: March 15 – April 10, 2026.
  • No signed-in user; fresh cookie jar per request; per-request rotation.
  • SERPs captured in Chromium-headless; positions scored relative to the organic top 10 + local pack + shopping pack.
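
The per-request hygiene above (fresh cookie jar, one rotation per request) can be sketched as a sample planner. The ZIP-tagged proxy-username syntax below is a hypothetical placeholder, not Proxaro's documented API — check your provider's docs for the real format:

```python
import itertools
from dataclasses import dataclass

@dataclass(frozen=True)
class Sample:
    zip_code: str
    query: str
    proxy_user: str  # unique per sample => fresh session / per-request rotation

def build_samples(zips, queries, account="acct123"):
    """One capture per (ZIP, query) pair. Tagging each sample with a unique
    session suffix approximates a fresh cookie jar plus per-request rotation."""
    return [
        # NOTE: "-zip-...-session-..." is an assumed targeting syntax,
        # not a real provider username format.
        Sample(z, q, f"{account}-zip-{z}-session-{i}")
        for i, (z, q) in enumerate(itertools.product(zips, queries))
    ]

samples = build_samples(["10019", "10075"], ["best dentist near me"])
```

Feeding each `proxy_user` to the HTTP client gives every request its own exit and session, which is the property the measurement depends on.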

Caveats: this is not a peer-reviewed study. Google's SERP personalization includes IP geo, signed-in history, previous queries in session, interface language, time-of-day, and ML-driven relevance signals. We stripped every knob except IP geo, so what we measured is "how much does IP geo alone move SERP?" — which is a subset of total personalization.

The short version

  • Local Pack (the map-card for local services) moves heavily per ZIP. This is the biggest IP-geo signal on SERP and the most operationally relevant.
  • Retail-category SERPs (shopping tab, product-listing pages) move meaningfully per state and DMA, less per ZIP.
  • Organic top 10 moves surprisingly little per ZIP on generic / broad-intent queries: shifts were under 2 positions within the top 10 for 70% of queries.
  • Branded navigational queries are nearly identical everywhere. If you're chasing branded navigational SERP shifts, IP geo isn't your signal.

The Local Pack pattern

For a query like "best dentist near me" from different ZIPs:

  • NYC 10019 (Midtown Manhattan) vs NYC 10075 (Upper East Side) — 7 of 10 listings in the local pack differ. The radius Google pulls from is smaller than the DMA.
  • LA 90026 (Echo Park) vs LA 90210 (Beverly Hills) — 8 of 10 differ. Same story.
  • Chicago 60614 (Lincoln Park) vs Chicago 60607 (West Loop) — 6 of 10 differ.

This is the Google Local Pack operating at ZIP / neighborhood resolution, not DMA resolution. If your workflow is local-SEO monitoring or local-services competitive intelligence, the exit ZIP has to match the client ZIP — nothing coarser works.
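
The per-ZIP divergence numbers above reduce to an overlap count between two captured packs. A minimal sketch, with illustrative place IDs rather than real measurement data:

```python
def pack_divergence(pack_a, pack_b):
    """Count listings in pack_a absent from pack_b (order ignored).
    For two 10-listing packs, a result of 7 reads as '7 of 10 differ'."""
    members_b = set(pack_b)
    return sum(1 for biz in pack_a if biz not in members_b)

# Illustrative listing IDs, not real data.
midtown_10019 = ["d01", "d02", "d03", "d04", "d05", "d06", "d07", "d08", "d09", "d10"]
ues_10075     = ["d01", "d02", "d03", "e04", "e05", "e06", "e07", "e08", "e09", "e10"]

pack_divergence(midtown_10019, ues_10075)  # → 7, i.e. 7 of 10 differ
```

Counting by listing identity rather than rank keeps the metric insensitive to reordering within the pack, which is a separate (and noisier) signal.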

For this class of workflow:

  • Use residential with per-request rotation.
  • Pin the ZIP (or at least the closest city we carry to the client's location) via Proxaro's city-level targeting.
  • Don't mix mobile into the local-services rotation; on mobile exits, Google's local results skew toward mobile-optimized listings and may rank them higher.

The retail / shopping pattern

For a query like "running shoes men" measured across DMAs:

  • NYC DMA 501 vs LA DMA 803 — shopping tab differs in 4 of the top 10 product tiles. The differences are mostly price (LA shows West Coast-fulfillment prices) and availability (some SKUs stock-gated by region).
  • Chicago DMA 602 vs Dallas DMA 623 — shopping tab shifts in 3 of the top 10. Similar pattern.
  • Within-state, across-metro shifts (e.g., Houston DMA 618 vs Dallas DMA 623) are smaller but non-zero — typically 1-2 products move.
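
Scoring those shopping-tab shifts needs two numbers per DMA pair: how many tiles changed, and how prices moved on shared SKUs. A sketch with made-up SKUs and prices:

```python
def shopping_diff(tiles_a, tiles_b):
    """tiles_* map SKU -> price for the top-N product tiles of one capture.
    Returns (tiles_changed, {sku: price_delta}) for a DMA pair, where
    tiles_changed counts A-tiles missing from B."""
    changed = sum(1 for sku in tiles_a if sku not in tiles_b)
    deltas = {
        sku: round(tiles_b[sku] - tiles_a[sku], 2)
        for sku in tiles_a
        if sku in tiles_b and tiles_a[sku] != tiles_b[sku]
    }
    return changed, deltas

# Illustrative data only.
nyc_501 = {"sku1": 89.99, "sku2": 120.00, "sku3": 74.50, "sku4": 99.00}
la_803  = {"sku1": 92.99, "sku2": 120.00, "sku5": 60.00, "sku6": 110.00}

shopping_diff(nyc_501, la_803)  # → (2, {'sku1': 3.0})
```

Separating "tile swapped" from "price moved" matters for price-intel work: the two reflect different fulfillment and stock-gating mechanics.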

This is DMA-resolution plus some state-level weighting. For e-commerce price-intel workflows:

  • DMA-level rotation is sufficient; ZIP-level rotation is overkill.
  • Rotate across a sample of DMAs representative of your target customer distribution (not all 210).
  • The coarser "state" rotation misses too much; DMA is the right grain.

The organic-top-10 pattern (the surprise)

The surprising result: for generic-intent queries ("best running shoes," "how to install a faucet," "python regex tutorial"), the organic top 10 is nearly identical across US DMAs. Average rank-position shifts were 0.3 to 1.7 positions per query across DMAs.
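
The rank-shift figure is a mean absolute displacement over the organic top 10. One sketch of that metric; scoring a URL that drops out of the top 10 at position 11 is our assumption here, not a stated part of the methodology:

```python
def avg_rank_shift(ranks_a, ranks_b, drop_out_pos=11):
    """Mean absolute position change for URLs in either top-10.
    ranks_* map URL -> 1-based position; missing URLs score drop_out_pos
    (an assumed convention for URLs that fall out of the top 10)."""
    urls = set(ranks_a) | set(ranks_b)
    return sum(
        abs(ranks_a.get(u, drop_out_pos) - ranks_b.get(u, drop_out_pos))
        for u in urls
    ) / len(urls)

# Tiny illustrative top-4s, not real data.
nyc = {"a": 1, "b": 2, "c": 3, "d": 4}
la  = {"a": 1, "b": 3, "c": 2, "e": 4}
avg_rank_shift(nyc, la)  # → 3.2
```

With full 10-URL captures and small reorderings, this metric lands in the sub-2 range the measurement reports.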

That's lower than we expected going in. The working theory:

  • Google's organic ranking is heavily ML-driven and weighted toward the global signal for a query.
  • IP geo is one signal among dozens, and for broad-intent queries it doesn't overwhelm the others.
  • Local intent gets routed to the Local Pack rather than re-sorting the organic 10.

The exception: queries with explicit local intent ("plumber in Queens," "coffee shop near me"). Those show >3-position shifts routinely, but those are really Local Pack queries with organic fallback — the Pack is the first-class signal.

The branded-navigational pattern

Queries like "Amazon" or "Netflix" — same top 10 everywhere, same rich results, same Knowledge Graph. IP geo doesn't meaningfully affect branded navigational.

If you're doing branded SEO monitoring — tracking a specific brand's SERP for brand-relevance queries — IP geo is a secondary signal at best. You're better off investing your rotation budget in query diversity and temporal sampling.

The mobile SERP divergence

Running the same query set through mobile exits (T-Mobile carrier 4G, Verizon carrier 4G) produces different SERPs:

  • Mobile-optimized organic results rank higher (Core Web Vitals + mobile-first indexing interaction).
  • Local Pack tightens: Google tends to show the top 3 map tiles on mobile SERP vs top 7 on desktop SERP.
  • Shopping tab shows different product-tile density.

For SERP monitoring that's trying to reproduce what a real user sees: rotate both desktop-class (residential) and mobile-class (carrier) exits. The mobile SERP diverges enough that desktop-only measurement misses the mobile-user experience.
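
Pairing every query with both exit classes keeps desktop and mobile coverage in lockstep; a trivial schedule builder (the exit-class labels are illustrative):

```python
def dual_class_schedule(queries, classes=("residential-desktop", "carrier-mobile")):
    """Every query is captured once per exit class, so mobile SERP
    divergence is measured rather than missed."""
    return [(q, c) for q in queries for c in classes]

dual_class_schedule(["running shoes men"])
# [('running shoes men', 'residential-desktop'),
#  ('running shoes men', 'carrier-mobile')]
```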

The practical recommendation

For US SERP monitoring workflows:

If you're doing local-services SEO:

  • City / ZIP-level residential rotation, per-request, 10+ samples per target query per city.
  • Budget: Coast plan if single-region; Carrier plan if national.

If you're doing retail / shopping SERP:

  • DMA-level residential rotation, 3-5 samples per DMA, rotate top 20 DMAs weighted by target customer distribution.
  • Budget: Coast plan is usually enough.

If you're doing branded SERP tracking:

  • Geographic sampling is less critical. A handful of diverse US exits plus temporal sampling (different times of day, different days of week) gives more signal than DMA-saturated rotation.
  • Budget: Local plan.

If you're doing competitive intelligence:

  • Mix residential and mobile. Desktop + mobile SERPs both matter.
  • Don't over-rotate. Google rate-limits aggressive patterns; pace requests at roughly 1 QPS per exit and keep per-request rotation within that budget.
  • Budget: Coast or Carrier depending on national scope.
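
The "1 QPS per exit, rotate per request" advice can be enforced with a per-exit timestamp table. A minimal pacer sketch:

```python
import time

class ExitPacer:
    """Allow at most one request per `min_interval` seconds per exit,
    while per-request rotation can still pick a different exit each time."""

    def __init__(self, min_interval=1.0):
        self.min_interval = min_interval
        self._last = {}  # exit_id -> monotonic timestamp of last send

    def delay_for(self, exit_id, now):
        """Seconds to wait before this exit may send again (0.0 if clear)."""
        last = self._last.get(exit_id)
        if last is None:
            return 0.0
        return max(0.0, last + self.min_interval - now)

    def acquire(self, exit_id):
        """Block until the exit is allowed to send, then record the send."""
        wait = self.delay_for(exit_id, time.monotonic())
        if wait > 0:
            time.sleep(wait)
        self._last[exit_id] = time.monotonic()
```

Because the budget is tracked per exit, a pool rotating across many exits can still sustain high aggregate throughput without any single exit exceeding ~1 QPS.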

Where this breaks

Two places where our measurement didn't tell us enough:

  1. Signed-in personalization. Google treats signed-in SERP differently; we stripped signed-in state. Signed-in users see more personalized SERPs, and IP geo is one of several signals fed to the personalization pipeline.
  2. Business local-listings freshness. Our measurement window was about four weeks. Local Pack composition can shift over weeks as businesses update listings, change categories, or get new reviews. Take the 6-of-10-differ number as a snapshot, not a stable metric.


For the state and city-level pool breakouts, see US residential coverage and per-state pages like California or Texas.
