AI Adoption Index

Methodology

This analysis measures AI adoption rates across ecommerce brands using Gorgias platform data as of March 2026, covering all active ecommerce merchants subscribed to Gorgias' AI Agent product at that time.

AI adoption is defined by platform-level behavioral signals, not self-reported usage. A brand is counted as having deployed AI only if it has reached activation and achieved at least 5% automation in a 7-day period with at least 5 billed tickets. This threshold filters out merchants who have enabled AI in their settings but have not meaningfully deployed it in live customer interactions. The choice to use behavioral log data rather than self-reported inputs is methodologically deliberate: research published in Nature Human Behaviour found, based on a meta-analysis of 106 effect sizes, that self-reported technology use correlates only moderately with logged measurements, and that self-reports were rarely an accurate reflection of actual logged behavior.1 In the context of AI adoption specifically, where "use" is commonly conflated with experimentation, evaluation, and production deployment, interaction-level logs provide a more reliable signal than survey-based proxies. Behavioral data collected through automated tracking is not subject to recall failure or social desirability bias, making it a more objective measure of what is actually occurring.2
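The activation threshold above can be sketched in code. This is an illustrative sketch only: the data structure and field names are assumptions, while the thresholds themselves (at least 5% automation and at least 5 billed tickets within a 7-day window) come from the definition above.

```python
from dataclasses import dataclass

# Thresholds taken from the methodology text.
MIN_AUTOMATION_RATE = 0.05   # at least 5% of tickets automated
MIN_BILLED_TICKETS = 5       # minimum billed tickets in the 7-day window

@dataclass
class WeeklyStats:
    """Hypothetical per-brand aggregate over one 7-day window."""
    billed_tickets: int      # tickets billed in the window
    automated_tickets: int   # tickets handled by the AI Agent in the window

def has_deployed_ai(weeks: list[WeeklyStats]) -> bool:
    """A brand counts as having deployed AI if any 7-day window meets
    both the minimum-volume and minimum-automation-rate thresholds."""
    for w in weeks:
        if w.billed_tickets >= MIN_BILLED_TICKETS:
            if w.automated_tickets / w.billed_tickets >= MIN_AUTOMATION_RATE:
                return True
    return False
```

Note that a brand with AI merely enabled but no qualifying window (for example, 4 automated tickets out of 100, or 4 tickets total) returns False, which is the filtering behavior the threshold is designed to achieve.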

Brands are segmented by Gross Merchandise Value (GMV) across thirteen tiers, ranging from $50K to $500M. The adoption rate within each tier is calculated as the proportion of brands in that tier meeting the activation threshold described above. GMV is used as the primary segmentation variable because it provides a consistent, quantifiable proxy for operational scale across a heterogeneous merchant population — capturing variation in support ticket volume, staffing capacity, and technology investment that would otherwise require separate measurement.
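The per-tier adoption rate described above is a simple proportion. A minimal sketch, assuming a list of (GMV tier, deployed) pairs as input; the tier labels shown are hypothetical examples, not the report's actual thirteen tiers:

```python
from collections import defaultdict

def adoption_rate_by_tier(brands: list[tuple[str, bool]]) -> dict[str, float]:
    """Adoption rate per GMV tier: the share of brands in each tier
    that meet the activation threshold (deployed == True)."""
    totals: dict[str, int] = defaultdict(int)
    adopted: dict[str, int] = defaultdict(int)
    for tier, deployed in brands:
        totals[tier] += 1
        if deployed:
            adopted[tier] += 1
    return {tier: adopted[tier] / totals[tier] for tier in totals}

# Hypothetical input: tier labels and deployment flags are illustrative.
brands = [
    ("$50K-$250K", True), ("$50K-$250K", False),
    ("$1M-$5M", True), ("$1M-$5M", True), ("$1M-$5M", False),
]
rates = adoption_rate_by_tier(brands)
```

Each tier's rate is independent of the others, so differences across tiers reflect within-tier behavior rather than the mix of brands across tiers.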

The underlying data is drawn from interaction-level platform logs. Every signal used to determine a brand's adoption status — whether AI was enabled, whether it handled tickets, whether it crossed the minimum automation threshold — comes from live ticket activity recorded in real time. Because users have imperfect memories and may be influenced by social desirability bias, asking questions about past behavior or future intentions can produce inaccurate results; behavioral data, by contrast, captures what users actually do rather than what they report doing.3 This distinction is particularly important for a study measuring production deployment rather than intent, where the gap between stated and actual behavior would directly distort the findings.
