Case Study: Removing GTM from Lighthouse Performance Impact

How we optimized a client's Google Tag Manager implementation to eliminate its performance footprint while maintaining 98%+ data collection accuracy - turning a 50% load impact into near-zero.

The Situation: A Well-Optimized Site With One Major Problem

A mid-market ecommerce client came to us with a frustrating problem. Their development team had done everything right:

  • Optimized images with modern formats (WebP, AVIF)
  • Implemented lazy loading throughout
  • Code-split their JavaScript bundles
  • Used edge caching and a modern CDN
  • Scored 95+ on Lighthouse for pages without tracking

But their production pages with full analytics? Lighthouse performance scores dropped to 72-78.

The culprit was obvious in the waterfall: Google Tag Manager.


The Diagnosis: GTM Was Over 50% of Load Impact

We ran a detailed performance audit, comparing pages with and without GTM. The results were stark:

Metric               Without GTM   With GTM   GTM Impact
LCP                  1.2s          2.1s       +75%
TBT                  89ms          412ms      +363%
Performance Score    96            74         -22 points
JS Parse Time        180ms         890ms      +394%
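For readers replicating the audit, the impact column is simply percent change relative to the no-GTM baseline:

```javascript
// Percent change relative to a baseline, rounded to the nearest whole percent.
function pctChange(before, after) {
  return Math.round(((after - before) / before) * 100);
}

console.log(pctChange(1.2, 2.1)); // LCP impact: 75
console.log(pctChange(89, 412));  // TBT impact: 363
console.log(pctChange(180, 890)); // JS parse impact: 394
```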

GTM wasn’t just a contributor - it was the dominant factor in their performance degradation. The container held 47 tags, 89 triggers, and 156 variables accumulated over three years of marketing team additions.

Breaking Down the Problem

The performance hit came from multiple sources:

1. Container Size: 847KB uncompressed JavaScript

  • 23 marketing pixels firing on page load
  • Redundant helper scripts loaded by multiple tags
  • Legacy tags for campaigns that ended years ago

2. Execution Timing: All tags fired synchronously on page load

  • No prioritization between critical and non-critical tags
  • Facebook, TikTok, Pinterest, and Google Ads all competing for main thread
  • Custom HTML tags with blocking scripts

3. Variable Overhead: 156 variables, many unused

  • Complex regex-based variables computing on every dataLayer push
  • DOM scraping variables querying the page repeatedly
  • Lookup tables with hundreds of entries

4. Trigger Proliferation: 89 triggers with significant overlap

  • Multiple triggers watching the same elements
  • Page view triggers with complex conditions
  • Scroll tracking firing continuously
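The continuous scroll firing in particular is cheap to tame. A minimal sketch of the kind of throttling that fixes it (the 250ms window and the event name are illustrative, not the client's actual configuration):

```javascript
// Throttle: invoke fn at most once per waitMs, using a timestamp guard.
function throttle(fn, waitMs) {
  let last = 0;
  return function (...args) {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn.apply(this, args);
    }
  };
}

// Illustrative usage: push at most one scroll-depth event per 250ms
// instead of one per scroll frame.
// document.addEventListener('scroll', throttle(function () {
//   window.dataLayer.push({ event: 'scroll_depth' });
// }, 250), { passive: true });
```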

The Strategy: Optimize, Defer, Preserve

Our goal wasn’t to remove analytics - the client needed their tracking data. Instead, we developed a three-phase approach:

  1. Optimize: Reduce container weight and execution time
  2. Defer: Load GTM after First Contentful Paint
  3. Preserve: Maintain data accuracy for meaningful user interactions

The Data Quality Trade-off

Here’s the honest conversation we had with the client:

“If we defer GTM until after page load, you’ll miss some data from users who bounce within the first 1-2 seconds. Based on your analytics, that’s approximately 2.3% of sessions. However, 89% of those bouncing users never scroll, never click, and contribute zero revenue. You’re trading minimal analytical value for a 20+ point performance improvement.”

The client agreed: losing data from users who spent less than 1 second on site was an acceptable trade-off.


Phase 1: Container Optimization

Tag Audit and Cleanup

We audited every tag against actual business requirements:

Tag Category             Before   After   Removed
Analytics (GA4, etc.)    4        2       2 duplicates
Advertising Pixels       23       8       15 unused/redundant
Remarketing              12       4       8 legacy campaigns
Custom HTML              8        2       6 obsolete
Total                    47       16      31 tags

Variable Consolidation

Many variables performed identical functions or were no longer referenced:

// BEFORE: 6 separate variables doing similar work
// {{Click URL - Full}}
// {{Click URL - Hostname}}
// {{Click URL - Path}}
// {{Click URL - Protocol}}
// {{Click URL - Query}}
// {{Link Click URL Custom}}

// AFTER: 1 built-in variable + 1 custom variable
// {{Click URL}} (built-in)
// {{URL Parts}} (single custom template returning object)

We reduced variables from 156 to 43 - a 72% reduction.
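A sketch of what the consolidated {{URL Parts}} variable can look like. GTM custom JavaScript variables are anonymous functions returning a value; the named function and field names below are illustrative assumptions:

```javascript
// One custom variable returning every URL component at once, replacing
// five separate string variables. Uses the standard URL API.
function getUrlParts(href) {
  const u = new URL(href);
  return {
    full: u.href,
    hostname: u.hostname,
    path: u.pathname,
    protocol: u.protocol.replace(':', ''),
    query: u.search.replace(/^\?/, ''),
  };
}
```

Tags that previously referenced five variables now read one property each off a single computed object, so the work happens once per event instead of five times.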

Trigger Optimization

Overlapping triggers were consolidated:

// BEFORE: 4 separate triggers
// - "All Pages - Marketing"
// - "All Pages - Analytics"
// - "All Pages - Remarketing"
// - "Page View - Universal"

// AFTER: 1 trigger with tag sequencing
// - "Page Load - All Tags" (with firing priority)

Trigger count dropped from 89 to 31.

Results After Phase 1

Metric            Before Optimization   After Phase 1   Improvement
Container Size    847KB                 312KB           -63%
Tags              47                    16              -66%
Variables         156                   43              -72%
Triggers          89                    31              -65%
TBT Impact        412ms                 234ms           -43%

Good progress, but GTM was still a significant performance factor.


Phase 2: Deferred Loading Implementation

The Technical Approach

Instead of loading GTM in the <head>, we implemented a deferred loading pattern that waits for First Contentful Paint:

<!-- Traditional GTM snippet (removed from <head>) -->

<!-- New: Deferred GTM loading -->
<script>
(function() {
  // Configuration
  var GTM_ID = 'GTM-XXXXXXX';
  var LOAD_DELAY = 0; // Load immediately after FCP, or set delay in ms

  function loadGTM() {
    if (window.gtmLoaded) return;
    window.gtmLoaded = true;

    // Initialize dataLayer with queued events
    window.dataLayer = window.dataLayer || [];

    // Load GTM script
    var script = document.createElement('script');
    script.async = true;
    script.src = 'https://www.googletagmanager.com/gtm.js?id=' + GTM_ID;
    document.head.appendChild(script);
  }

  // Strategy: Load after First Contentful Paint
  if ('PerformanceObserver' in window) {
    var observer = new PerformanceObserver(function(list) {
      list.getEntries().forEach(function(entry) {
        if (entry.name === 'first-contentful-paint') {
          observer.disconnect();
          setTimeout(loadGTM, LOAD_DELAY);
        }
      });
    });
    // buffered:true catches FCP even if it fired before this script ran
    observer.observe({ type: 'paint', buffered: true });

    // Fallback: load after 3 seconds regardless (no-op if GTM already loaded)
    setTimeout(loadGTM, 3000);
  } else {
    // Fallback for older browsers
    window.addEventListener('load', loadGTM);
  }

  // Immediate load on user interaction (captures engaged users)
  ['scroll', 'click', 'touchstart', 'keydown'].forEach(function(event) {
    document.addEventListener(event, loadGTM, { once: true, passive: true });
  });
})();
</script>

Preserving the DataLayer Queue

Critical insight: users interact with the page before GTM loads. We needed to capture those events:

<script>
// Initialize dataLayer BEFORE deferred GTM load
window.dataLayer = window.dataLayer || [];

// Custom wrapper to queue events with timestamps
window.pushData = function(data) {
  data._timestamp = Date.now();
  data._preGTM = !window.gtmLoaded;
  window.dataLayer.push(data);
};
</script>

This ensured that any dataLayer events pushed before GTM loaded would still be processed once the container initialized.
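The queue's behavior can be simulated in a few lines (simplified: in production, gtm.js itself drains the backlog when the container initializes; the fakeWindow object here stands in for window):

```javascript
// Simplified simulation of the pre-GTM event queue. Events pushed before
// the container loads sit in the plain dataLayer array; when gtm.js
// initializes, it processes the backlog, so data is delayed, not lost.
const fakeWindow = { dataLayer: [], gtmLoaded: false };

function pushData(data) {
  data._timestamp = Date.now();
  data._preGTM = !fakeWindow.gtmLoaded;
  fakeWindow.dataLayer.push(data);
}

pushData({ event: 'view_item' });   // before GTM: queued, _preGTM = true
fakeWindow.gtmLoaded = true;        // deferred container initializes
pushData({ event: 'add_to_cart' }); // after GTM: _preGTM = false
```

The _preGTM flag also makes it trivial to segment "queued before container load" events during the later data-accuracy validation.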

Tag Firing Priority

Within GTM, we restructured tag firing to prioritize essential analytics:

Priority 1 (Immediate): GA4 Page View
Priority 2 (100ms delay): Conversion pixels (for users from paid campaigns)
Priority 3 (200ms delay): Remarketing tags
Priority 4 (Idle callback): Secondary analytics, heatmaps

Using Tag Sequencing and custom timing, we spread the main thread work across multiple frames.
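One way to express that priority table in code, as a sketch (the tier numbers and delays mirror the table above; fireTag and the fallback delay are illustrative assumptions, not the client's actual implementation):

```javascript
// Map each tag tier to a scheduling strategy so no single frame absorbs
// all of the container's work. Tiers 1-3 get fixed delays; tier 4 waits
// for main-thread idle time where the browser supports it.
const TIER_DELAY_MS = { 1: 0, 2: 100, 3: 200 };

function scheduleTag(tier, fireTag) {
  if (tier in TIER_DELAY_MS) {
    setTimeout(fireTag, TIER_DELAY_MS[tier]);
  } else if (typeof requestIdleCallback === 'function') {
    // Tier 4: heatmaps, secondary analytics -- run when the thread is idle.
    requestIdleCallback(fireTag, { timeout: 2000 });
  } else {
    setTimeout(fireTag, 500); // conservative fallback for older browsers
  }
}
```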


Phase 3: Async Tag Loading

For the remaining tags, we ensured none blocked the main thread:

Custom Tag Template: Async Script Loader

// GTM Custom Template (sandboxed JavaScript) for non-blocking script loading
const injectScript = require('injectScript');
const callInWindow = require('callInWindow');

const url = data.scriptUrl;
const onSuccess = data.gtmOnSuccess;
const onFailure = data.gtmOnFailure;

if (data.loadPriority === 'low') {
  // Defer non-critical scripts until the main thread is idle. Note that
  // calling requestIdleCallback through callInWindow requires the
  // template's permissions to allow access to that global.
  callInWindow('requestIdleCallback', function() {
    injectScript(url, onSuccess, onFailure);
  }, { timeout: 2000 });
} else {
  injectScript(url, onSuccess, onFailure);
}

Results After Full Implementation

Metric               Original   After All Phases   Total Improvement
LCP                  2.1s       1.3s               -38%
TBT                  412ms      52ms               -87%
Performance Score    74         94                 +20 points
JS Parse Time        890ms      195ms              -78%
GTM Load Impact      50%+       Less than 5%       Near-zero

Data Accuracy Analysis

After 30 days of running the new implementation, we compared data collection:

Session Capture Rate

Metric                       Before     After      Delta
Total Sessions               847,293    832,512    -1.7%
Sessions with Transactions   12,847     12,831     -0.1%
Revenue Tracked              $2.34M     $2.33M     -0.4%

The 1.7% session reduction came entirely from users who:

  • Bounced within 1.2 seconds (87% of missed sessions)
  • Never scrolled past the fold (94% of missed sessions)
  • Came from bot traffic subsequently filtered anyway (8% of missed sessions)

Conversion Tracking Accuracy

Because engaged users trigger GTM load via scroll or click, conversion tracking remained essentially intact:

Transactions tracked: 99.9% (vs. payment processor records)
Add-to-cart events: 99.2%
Product views: 97.8%

The slight drop in product views aligned with users who viewed a product but immediately bounced - low analytical value.


Key Takeaways

1. GTM Optimization Before Deferral

Don’t just defer a bloated container. Clean it first:

  • Audit tags quarterly
  • Remove unused variables
  • Consolidate overlapping triggers
  • Question every Custom HTML tag

2. Defer Strategically

Wait for FCP, not arbitrary time delays:

  • Use PerformanceObserver when available
  • Trigger on user interaction as a secondary signal
  • Always include a maximum timeout fallback

3. Accept Minimal Data Loss

Losing 1-2% of sessions from instant bouncers is acceptable when:

  • Those users contribute near-zero revenue
  • Their behavior patterns offer minimal insight
  • The performance improvement benefits the 98% who matter

4. Validate With Real Data

After implementation, compare:

  • Session counts (expect 1-3% reduction)
  • Conversion tracking vs. payment processor
  • Revenue attribution accuracy
  • User behavior metrics (scroll depth, time on site)

Implementation Checklist

For teams looking to replicate this approach:

  • Audit current GTM container (tags, triggers, variables)
  • Identify and remove unused/legacy tags
  • Consolidate redundant variables
  • Merge overlapping triggers
  • Measure baseline performance (LCP, TBT, Score)
  • Implement deferred loading after FCP
  • Add user interaction triggers for early load
  • Configure dataLayer queue preservation
  • Set up tag firing priorities
  • Test in staging environment
  • Deploy with monitoring
  • Compare data accuracy after 7-30 days
  • Adjust timing thresholds if needed

Conclusion

This engagement demonstrated that even well-optimized sites can have their performance destroyed by accumulated tracking debt. By systematically optimizing the GTM container, deferring its load until after critical rendering, and accepting minimal data loss from low-value bounces, we:

  • Improved Lighthouse performance scores by 20 points
  • Reduced Total Blocking Time by 87%
  • Maintained 99.9% conversion tracking accuracy
  • Lost only 1.7% of sessions (from instant bouncers)

The result: the client’s pages now score 94+ on Lighthouse while still collecting the data their marketing team needs. Their Core Web Vitals passed, their SEO improved, and their analytics remained trustworthy.

GTM doesn’t have to be a performance liability - it just needs to be treated like the significant JavaScript payload it is.
