

TL;DR:
The way shoppers buy online has shifted and customers are at the center.
They no longer want to scroll through product pages, dig through FAQs, or wait 24 hours for an email reply. They open a conversation, ask a specific question, and expect a useful answer in seconds. Brands that can’t deliver these experiences at scale are seeing customer hesitation turn into abandoned carts and lost revenue.
This shift has a name: conversational commerce. It's the practice of using real-time, two-way conversations as your primary sales channel, through chat, AI agents, messaging apps, and voice.
What started as an experiment for early adopters has become a key growth lever: 84% of ecommerce brands now treat conversational commerce as a strategic pillar, up from last year.

We surveyed 400 ecommerce decision-makers across North America, the U.K., and Europe to understand how conversational commerce and AI are reshaping the ecommerce landscape. These findings are complemented by aggregated and anonymized internal Gorgias platform data from 16,000+ ecommerce brands.
The State of Conversational Commerce in 2026 trends report breaks down all of the findings, including five key trends shaping the ecommerce landscape.
{{lead-magnet-1}}
A few years ago, adding an AI chatbot to your site that could provide tracking links and Help Center article recommendations was a differentiator. Today, it's table stakes. McKinsey found that 71% of shoppers expect personalized experiences, and 76% get frustrated when they don't get them.
Right now, most ecommerce professionals use AI, with 93% having used it for at least one year. Enthusiasm is accelerating quickly: as recently as April 2025, only 30% of ecommerce professionals rated their excitement for AI at 10/10. Similarly, while AI adoption rose steadily year over year, it reached a clear peak in 2026.

The use cases driving this adoption are practical and high-volume:

These are the tickets that flood brands’ inboxes every day. AI agents resolve them instantly, without pulling teams away from conversations that actually require human judgment.
Explore AI adoption and use case data in more depth in the full report.
The traditional ecommerce funnel (visit the site, browse products, add to cart, check out) is losing ground. Shoppers now discover products on Instagram, ask questions via direct message, and complete purchases without ever visiting a website.

Conversational AI is actively increasing revenue, with 79% of brands reporting that AI-driven interactions have increased sales and conversion in their business.

The practical implication is that every channel is becoming a storefront. Creating personalized touchpoints with customers earlier in the journey, through proactive engagement, is impacting the bottom line.
Read the full report to explore how AI conversions have increased QoQ by industry.
Pre-purchase hesitation is one of the biggest conversion killers in ecommerce. A shopper lands on your product page, has a question about sizing or compatibility, can't find the answer quickly, and leaves. That's a lost sale that had nothing to do with your product.
Conversational AI changes that dynamic. When a shopper can ask a question and get an accurate, personalized answer in real time, the friction disappears.
Brands using Gorgias saw this play out at scale in 2025. When AI Agent recommended a product, 80% of the resulting purchases happened the same day, and 13% happened the next day.

Brands are further accelerating the buying cycle through proactive engagement. On-site features such as suggested product questions, recommendations triggered by search results, and “Ask Anything” input bars drove 50% of conversation-driven purchases during BFCM 2025.
Explore how AI is collapsing the purchase cycle in Trend 3 of the report.
There's a persistent narrative that AI is making CX teams redundant. The data tells a different story. 62% of ecommerce brands are planning to grow their teams, not cut them. But the scope of those teams is changing.

New roles are emerging around AI configuration and quality assurance. Teams are investing in technical members to write AI Guidance instructions, develop tone-of-voice instructions, and continuously QA results.
CX teams are also bridging the gap between support goals and revenue goals, as the two functions increasingly overlap.

The result is CX teams that are more technical than they were before. Agents who once spent their days answering repetitive tickets are now spending that time on higher-value work: complex escalations, VIP customer relationships, and improving the AI systems and knowledge bases that handle the volume.
Learn more about the evolution of CX roles in Trend #4.
Despite increasing AI adoption, data shows that ecommerce brands shouldn’t strive for 100% automation. Winning brands are building systems in which AI handles repetitive tier-1 tickets, and humans handle complex, sensitive cases.

AI handles speed and scale. It resolves order-tracking requests at 2 a.m., processes return-eligibility checks in seconds, and answers the same shipping question for the thousandth time without compromising quality.
Human agents handle conversations that require context, empathy, or decisions that fall outside the standard playbook. There are several topics where shoppers still prefer human support.

Successful hybrid systems require continuous iteration: reviewing handover topics, Guidance, and AI tickets on a weekly basis.
Discover how leading brands are balancing human and AI systems in Trend #5.
The 2026 trends are about expansion and standardization. The 2030 predictions are about what comes next.

Voice-based purchasing is the biggest bet on the horizon. Only 7% of brands currently use voice assistants for commerce, but 89% expect it to be standard by 2030. The vision is a customer who can reorder a product, check their subscription status, or manage a return entirely over the phone.
Proactive AI is the other major shift. Rather than waiting for a customer to reach out, AI will anticipate needs based on browsing behavior, purchase history, and where someone is in their relationship with your brand. Think of it as the digital equivalent of a sales associate who remembers what you bought last time and knows what you're likely to need next.
Explore where ecommerce brands are allocating their AI budgets in the full report.
The brands winning in 2026 are creating smart, scalable systems where AI handles volume and humans handle nuance. They’re treating every conversational channel as an opportunity to serve and sell.
The data is clear: AI adoption is accelerating, customer expectations are rising, and the revenue impact of getting this right is measurable.
{{lead-magnet-1}}
TL;DR:
In 2025, chat’s growth outpaced email by 2.5x quarter over quarter. Chat has become our most powerful customer experience tool, shaping how shoppers discover products, ask questions, and decide to buy.
We knew it needed an upgrade, so we reimagined the entire experience from the ground up.
The result: 36% more engagement with product recommendations, nearly 2.25x more add-to-cart actions, and a 7.3% lift in customer engagement.
In this post, we'll walk you through our thinking, what’s new in Chat, and how brands are already seeing big gains.
Chat has outpaced email support. Today’s shoppers prefer the speed of quick chat conversations over email. And when shoppers make a new move, we watch, listen, and move with them.
This behavioral shift isn’t happening in isolation. It aligns with the rise of conversational commerce and proves a universal move toward real-time conversations in ecommerce.
In fact, the signals were already there. Two years of building AI Agent showed us just how much design shapes behavior. The interface is the experience, and we knew that pushing chat experiences to closely resemble human interactions would transform how shoppers engage.
Our new and updated chat brings that vision to life. We believe that shopping is moving from static pages to conversations. This new update is built for how people actually want to shop.
The new design turns live chat into an interactive shopping surface made for modern shoppers. We've brought together multiple ways for shoppers to jump into chat, added clickable replies instead of typing, browsable product cards right in the conversation, and quick cart access.
Let's walk through what's new.
Chat now comes in a softer color palette that adapts to your store’s branding. We removed message bubbles in favor of an airy design that brings in the familiarity of speaking to your favorite conversational AI assistant. Every interaction now has the breathing room for deeper conversation and personalization.

It’s now easier for shoppers to get an answer with quick reply buttons and suggested questions in Chat. This replaces the tree-based flows of the previous Chat, removing the need to follow a fixed path. Shoppers can find answers faster without typing text-heavy explanations.

Browsing and buying within Chat is now possible. Previously, it only supported product links that would open in a new page. With the upgrade, you can view item details without leaving the conversation. Shoppers can browse, compare products, and add to cart in one place.

We’re keeping the context by removing the external redirects. The new interface lets shoppers browse product recommendations right in chat. View key product details, images, descriptions, variants, and pricing without opening a new tab.

Chat adds clickable questions on product pages — like “Is this true to size?” or “What’s the difference between shades?” — designed to match what a shopper is likely wondering in the moment. These context-aware prompts help remove buying hesitation before shoppers even think to ask.

Chat adds instant access to shopper actions, like a cart button and an orders button for returning customers. Shoppers can jump straight to their cart or check on an existing order without waiting for an agent to give them a status update.

Every update in Chat drives performance. We didn’t simply give it a makeover; we also fine-tuned its underlying mechanics.
When product suggestions are easy to browse, shoppers interact with them more. The new product cards make shopping feel natural, allowing customers to explore items at their own pace. That convenience led to a 36% increase in engagement with recommended products.
Chat keeps the entire shopping journey inside the conversation, from browsing and asking questions, to adding to cart and checking out. This new layout removes the usual tab-switching between chat and the website. Less friction has led to more than double add-to-cart actions than before the redesign.
Chat's cleaner design and contextual entry points make it easier for shoppers to start a conversation. With suggested questions on product pages and quick reply buttons, more visitors are choosing to engage earlier in their journey. This has resulted in a 7.3% lift in chat engagement.
Conversational commerce has moved from concept to reality. Chat makes it part of the everyday shopping experience, letting shoppers browse, ask questions, compare products, and check out in one interaction. It brings the ease of the in-person shopping experience into the digital world.
We built Chat to redefine the shopping experience. We hope you see it reflected in your customers’ journeys.
Book a demo to see what's possible with the new experience.

TL;DR:
A year ago, ecommerce brands were still debating whether AI was worth the investment. That debate is over. Today, nearly every ecommerce professional uses AI to do their job.
The shift isn't just about adoption. It's about what AI is used for and how brands measure its impact. Support automation was the entry point. Now, AI is embedded across the full operation, from product recommendations to inventory control to real-time shopping conversations.
In our 2026 State of Conversational Commerce Report, we break down trends on AI usage among 400 ecommerce decision-makers and 16,000+ ecommerce brands using Gorgias.
{{lead-magnet-1}}
If we rewind 12 months, the industry was still split on AI. Some ecommerce professionals were excited, but most were still hesitant. In 2024, 69% of ecommerce professionals used AI in their roles. By 2025, that number reached 77%. In 2026, it hit 96%.

The confidence numbers back it up. 71% of brands say they are confident using AI for ecommerce, and 73% are satisfied with its business impact.
In early 2025, only 30% of ecommerce professionals rated their excitement for AI at 10/10. Today, zero percent of respondents describe themselves as hesitant about AI.

Using AI in ecommerce is not new. In fact, it dates back to the 1980s and the first rule-based algorithms and expert systems. And if you’ve ever leveraged similar-product recommendations or chatbots, you’ve already integrated AI into your ecommerce stack.
Modern AI is far more sophisticated.
With the rise of agentic commerce and conversational AI, brands began leveraging AI agents to automate the processing of repetitive support tickets. That’s still happening today, but the scope has expanded beyond the support queue.

Ecommerce brands are deploying AI across every layer of their operation:
When brands were asked which channels contribute most to their AI success, conversational channels dominated. Social media messaging led at 78%, followed by SMS at 70%, and website live chat at 51%. Shoppers want fast, personal conversations, and AI is the best way to deliver that at scale.
Learn more about AI adoption, perception, and use case trends in the full 2026 Conversational Commerce Report.
For decades, customer support success meant fast response times and high satisfaction scores. Those are still important indicators of success, but leading brands are adding revenue-focused metrics to their dashboards.
91% of brands still track CSAT as a measure of AI's impact. But 60% now include AOV as a top indicator, and higher-revenue brands earning $20M+ are focusing on metrics like total operating expenses, cost per resolution, incremental revenue, and one-touch ticket rate.

AI can now start a conversation, ease customer doubts, sell, upsell, and recover abandoned carts in a single interaction. When you only measure CSAT, you’re ignoring the real ROI of your conversational AI investment.
Virtual shopping assistants now proactively engage shoppers, adapt to their needs in real time, and offer contextual product recommendations and upsells. When the moment calls for it, they can close the deal with a targeted discount.
Gorgias brands using AI Agent's shopping assistant capabilities nearly doubled their purchase rates and converted 20–50% better than those using AI Agent for support only.
Orthofeet, the largest provider of orthopedic footwear in the US, is a concrete example of this in practice. Using Gorgias, they achieved:
The data tells a clear story: AI has evolved beyond a tool for handling tier 1 support tickets. It’s a core part of your revenue generation strategy.
57% of brands are already using AI for 26–50% of all customer interactions, and 37% expect that share to rise to 51–75% within the next two years. The brands building toward that range now are the ones who will have the operational advantage when it matters most.
The practical question isn't whether to invest in AI. It's where to focus first. Based on where brands are seeing the most impact, three priorities stand out:
Want to go deeper on the full 2026 conversational commerce trends? Read the complete report for data across every major AI use case in ecommerce.
{{lead-magnet-1}}

TL;DR:
Customer education has become a critical factor in converting browsers into buyers. For wellness brands like Cornbread Hemp, where customers need to understand ingredients, dosages, and benefits before making a purchase, education has a direct impact on sales. The challenge is scaling personalized education when support teams are stretched thin, especially during peak sales periods.
Katherine Goodman, Senior Director of Customer Experience, and Stacy Williams, Senior Customer Experience Manager, explain how implementing Gorgias's AI Shopping Assistant transformed their customer education strategy into a conversion powerhouse.
In our second AI in CX episode, we dive into how Cornbread achieved a 30% conversion rate during BFCM, saving their CX team over four days of manual work.
Before diving into tactics, understanding why education matters in the wellness space helps contextualize this approach.
Katherine, Senior Director of Customer Experience at Cornbread Hemp, explains:
"Wellness is a very saturated market right now. Getting to the nitty-gritty and getting to the bottom of what our product actually does for people, making sure they're educated on the differences between products to feel comfortable with what they're putting in their body."
The most common pre-purchase questions Cornbread receives center around three areas: ingredients, dosages, and specific benefits. Customers want to know which product will help with their particular symptoms. They need reassurance that they're making the right choice.
What makes this challenging: These questions require nuanced, personalized responses that consider the customer's specific needs and concerns. Traditionally, this meant every customer had to speak with a human agent, creating a bottleneck that slowed conversions and overwhelmed support teams during peak periods.
Stacy, Senior Customer Experience Manager at Cornbread, identified the game-changing impact of Shopping Assistant:
"It's had a major impact, especially during non-operating hours. Shopping Assistant is able to answer questions when our CX agents aren't available, so it continues the customer order process."
A customer lands on your site at 11 PM, has questions about dosage or ingredients, and instead of abandoning their cart or waiting until morning for a response, they get immediate, accurate answers that move them toward purchase.
The real impact happens in how the tool anticipates customer needs. Cornbread uses suggested product questions that pop up as customers browse product pages. Stacy notes:
"Most of our Shopping Assistant engagement comes from those suggested product features. It almost anticipates what the customer is asking or needing to know."
Actionable takeaway: Don't wait for customers to ask questions. Surface the most common concerns proactively. When you anticipate hesitation and address it immediately, you remove friction from the buying journey.
One of the biggest myths about AI is that implementation is complicated. Stacy explains how Cornbread’s rollout was a straightforward three-step process: audit your knowledge base, flip the switch, then optimize.
"It was literally the flip of a switch and just making sure that our data and information in Gorgias was up to date and accurate."
Here's Cornbread’s three-phase approach:
Actionable takeaway: Block out time for that initial knowledge base audit. Then commit to regular check-ins because your business evolves, and your AI should evolve with it.
Read more: AI in CX Webinar Recap: Turning AI Implementation into Team Alignment
Here's something most brands miss: the way you write your knowledge base articles directly impacts conversion rates.
Before BFCM, Stacy reviewed all of Cornbread's Guidance and rephrased the language to make it easier for AI Agent to understand.
"The language in the Guidance had to be simple, concise, very straightforward so that Shopping Assistant could deliver that information without being confused or getting too complicated," Stacy explains. When your AI can quickly parse and deliver information, customers get faster, more accurate answers. And faster answers mean more conversions.
Katherine adds another crucial element: tone consistency.
"We treat AI as another team member. Making sure that the tone and the language that AI used were very similar to the tone and the language that our human agents use was crucial in creating and maintaining a customer relationship."
As a result, customers often don't realize they're talking to AI. Some even leave reviews saying they loved chatting with "Ally" (Cornbread's AI agent name), not realizing Ally isn't human.
Actionable takeaway: Review your knowledge base with fresh eyes. Can you simplify without losing meaning? Does it sound like your brand? Would a customer be satisfied with this interaction? If not, time for a rewrite.
Read more: How to Write Guidance with the “When, If, Then” Framework
The real test of any CX strategy is how it performs under pressure. For Cornbread, Black Friday Cyber Monday 2025 proved that their conversational commerce strategy wasn't just working, it was thriving.
Over the peak season, Cornbread saw:
Katherine breaks down what made the difference:
"Shopping Assistant popping up, answering those questions with the correct promo information helps customers get from point A to point B before the deal ends."
During high-stakes sales events, customers are in a hurry. They're comparing options, checking out competitors, and making quick decisions. If you can't answer their questions immediately, they're gone. Shopping Assistant kept customers engaged and moving toward purchase, even when human agents were swamped.
Actionable takeaway: Peak periods require a fail-safe CX strategy. The brands that win are the ones that prepare their AI tools in advance.
One of the most transformative impacts of conversational commerce goes beyond conversion rates. What your team can do with their newfound bandwidth matters just as much.
With AI handling straightforward inquiries, Cornbread's CX team has evolved into a strategic problem-solving team. They've expanded into social media support, provided real-time service during a retail pop-up, and have time for the high-value interactions that actually build customer relationships.
Katherine describes phone calls as their highest value touchpoint, where agents can build genuine relationships with customers. “We have an older demographic, especially with CBD. We received a lot of customer calls requesting orders and asking questions. And sometimes we end up just yapping,” Katherine shares. “I was yapping with a customer last week, and we'd been on the call for about 15 minutes. This really helps build those long-term relationships that keep customers coming back."
That's the kind of experience that builds loyalty, and becomes possible only when your team isn't stuck answering repetitive tickets.
Stacy adds that agents now focus on "higher-level tickets or customer issues that they need to resolve. AI handles straightforward things, and our agents now really are more engaged in more complicated, higher-level resolutions."
Actionable takeaway: Stop thinking about AI only as a cost-cutting tool and start seeing it as an impact multiplier. The goal is to free your team to work on conversations that actually move the needle on customer lifetime value.
Cornbread isn't resting on their BFCM success. They're already optimizing for January, traditionally the biggest month for wellness brands as customers commit to New Year's resolutions.
Their focus areas include optimizing their product quiz to provide better data to both AI and human agents, educating customers on realistic expectations with CBD use, and using Shopping Assistant to spotlight new products launching in Q1.
The brands winning at conversational commerce aren't the ones with the biggest budgets or the largest teams. They're the ones who understand that customer education drives conversions, and they've built systems to deliver that education at scale.
Cornbread Hemp's success comes down to three core principles: investing time upfront to train AI properly, maintaining consistent optimization, and treating AI as a team member that deserves the same attention to tone and quality as human agents.
As Katherine puts it:
"The more time that you put into training and optimizing AI, the less time you're going to have to babysit it later. Then, it's actually going to give your customers that really amazing experience."
Watch the replay of the whole conversation with Katherine and Stacy to learn how Gorgias’s Shopping Assistant helps them turn browsers into buyers.
{{lead-magnet-1}}

TL;DR:
Your AI sounds like a robot, and your customers can tell.
Sure, the answer is right, but something feels off. The tone of voice is stiff. The phrases are predictable and generic. At worst, it sounds copy-pasted. This may not seem like a big deal from your side of support, but in reality, it’s costing you more than you think.
Recent data shows that 45% of U.S. adults view customer service chatbots unfavorably, up from 43% in 2022. As awareness of chatbots has increased, so have negative opinions of them. Only 19% of people say chatbots are helpful or beneficial in addressing their queries. The gap isn't just about capability. It's about trust. When AI sounds impersonal, customers disengage or leave frustrated.
Luckily, you don't need to choose between automation and the human touch.
In this guide, we'll show you six practical ways to train your AI to sound natural, build trust, and deliver the kind of support your customers actually like.
The fastest way to make your AI sound more human is to teach it to sound like you. AI is only as good as the input you give it, so the more detailed your brand voice training, the more natural and on-brand your responses will be.
Start by building a brand voice guide. It doesn't need to be complicated, but it should clearly define how your brand communicates with customers. At minimum, include:
Think of your AI as a character. Samantha Gagliardi, Associate Director of Customer Experience at Rhoback, described their approach as building an AI persona:
"I kind of treat it like breaking down an actor. I used to sing and perform for a living — how would I break down the character of Rhoback? How does Rhoback speak? What age are they? What makes the most sense?"
✅ Create a brand voice guide with tone, style, formality, and example phrases.
Humans associate short pauses with thinking, so when your AI responds too quickly, it instantly feels unnatural.
Adding small delays helps your AI feel more like a real teammate.
Where to add response delays:
Even a one- to two-second pause can make the difference between a robotic-sounding AI and a human-sounding one.
✅ Add instructions in your AI’s knowledge base to include short response delays during key moments.
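To make the idea concrete, here's a minimal sketch of what a response delay could look like in a custom chat widget. The `humanizedDelayMs` helper and the widget methods are illustrative assumptions, not part of any real Gorgias API:

```javascript
// Illustrative sketch: pause briefly before rendering an AI reply so the
// response reads as "thinking" rather than instant and robotic.
function humanizedDelayMs(replyLength) {
  // Scale the pause with reply length; clamp between 1000 and 2000 ms.
  return Math.min(2000, Math.max(1000, replyLength * 15));
}

async function sendReply(widget, reply) {
  widget.showTypingIndicator(); // assumed widget method
  const pause = humanizedDelayMs(reply.length);
  await new Promise((resolve) => setTimeout(resolve, pause));
  widget.hideTypingIndicator();
  widget.render(reply);
}
```

Longer replies earn a slightly longer pause, which mirrors how a human takes more time to type more text.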
Generic phrases make your AI sound like... well, AI. Customers can spot a copy-pasted response immediately — especially when it's overly formal.
That doesn't mean you need to be extremely casual. It means being true to your brand. Whether your voice is professional or conversational, the goal is the same: sound like a real person on your team.
Here's how to replace robotic phrasing with more brand-aligned responses:
| Generic Phrase | More Natural Alternative |
|---|---|
| “We apologize for the inconvenience.” | “Sorry about that, we’re working on it now.” (friendly) |
| “Your satisfaction is our top priority.” | “We want to make sure this works for you.” (friendly) |
| “Please be advised…” | “Just a quick heads up…” (friendly) |
| “Your request has been received.” | “Got it. Thanks for reaching out.” (friendly) |
| “I will now review your request.” | “Let me take a quick look.” (friendly) |
✅ Identify your five most common inquiries and give your AI a rewritten example response for each.
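One lightweight way to enforce those rewrites is a post-processing pass that swaps stock support phrases for brand-aligned ones before the reply is sent. A rough sketch; the phrase map below is an example, not an official list:

```javascript
// Illustrative sketch: replace generic support phrases with friendlier,
// brand-aligned alternatives in outgoing replies.
const PHRASE_MAP = {
  "We apologize for the inconvenience.": "Sorry about that, we're working on it now.",
  "Please be advised": "Just a quick heads up",
  "Your request has been received.": "Got it. Thanks for reaching out.",
};

function humanizeReply(reply) {
  let out = reply;
  for (const [generic, natural] of Object.entries(PHRASE_MAP)) {
    out = out.split(generic).join(natural); // replace every occurrence
  }
  return out;
}
```

A pass like this is a safety net, not a substitute for training the AI on your voice up front.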
One of the biggest tells that a response is AI-generated? It ignores what's already happened.
When your AI doesn't reference order history or past conversations, customers are forced to repeat themselves. Repetition can lead to frustration and can quickly turn a good customer experience into a bad one.
Great AI uses context to craft replies that feel personalized and genuinely helpful.
Here's what good context looks like in AI responses:
Tools like Gorgias AI Agent automatically pull in customer and order data, so replies feel human and contextual without sacrificing speed.
✅ Add instructions that prompt your AI to reference order details and/or past conversations in its replies, so customers feel acknowledged.
Customers just want help. They don't care whether it comes from a human or AI, as long as it's the right help. But if you try to trick them, it backfires fast. AI agents that pretend to be human often give customers the runaround, especially when the issue is complex or emotional.
A better approach is to be transparent. Solve what you can, and hand off anything else to an agent as needed.
When to disclose that the customer is talking to AI:
For more on this topic, check out our article: Should You Tell Customers They're Talking to AI?
✅ Set clear rules for when your AI should escalate to a human and include handoff messaging that sets expectations and preserves context.
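As a sketch of what explicit escalation rules might look like in code (the topics, sentiment labels, and thresholds here are invented examples, not Gorgias defaults):

```javascript
// Illustrative sketch: decide whether a ticket should be handed off to a
// human agent instead of being answered by AI.
const HUMAN_TOPICS = ["refund dispute", "medical question", "legal", "complaint"];

function shouldEscalate(ticket) {
  // Hand off sensitive topics, negative sentiment, or repeated contacts.
  return (
    HUMAN_TOPICS.includes(ticket.topic) ||
    ticket.sentiment === "negative" ||
    ticket.contactCount > 2
  );
}
```

Keeping the rules this explicit makes them easy to review weekly and adjust as you learn where the AI struggles.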
We're giving you permission to break the rules a little bit. The most human-sounding AI doesn't follow perfect grammar or structure. It reflects the messiness of real dialogue.
People don't speak in flawless sentences every time. We pause, rephrase, cut ourselves off, and throw in the occasional emoji or "uh." When AI has an unpredictable cadence, it feels more relatable and, in turn, more human.
What an imperfect AI could look like:
These imperfections give your AI a more believable voice.
✅ Add instructions for your AI that permit variation in grammar, tone, and sentence structure to mimic real human speech.
Human-sounding AI doesn’t require complex prompts or endless fine-tuning. With the right voice guidelines, small tone adjustments, and a few smart instructions, your AI can sound like a real part of your team.
Book a demo of Gorgias AI Agent and see for yourself.
{{lead-magnet-2}}

Each month, our product team holds a casual, conversational event with our customers to demo new features, receive real-time feedback, and answer live Q&As.
Watch the video recap here, or read on for a recap of the latest releases.
With this new channel, you can receive and respond to SMS and MMS messages within Gorgias. This makes it easy for your customers to communicate with your store while they’re on the go, and easy for your agents to provide fast, conversational support.

We’re releasing SMS this quarter as a free trial for every customer on every plan. Conversations will count toward your plan’s ticket count, but there are no additional charges for minutes, usage, phone numbers, etc. In the coming months, we’ll be assessing the best way to provide Voice and SMS so we can continue to innovate and build powerful new features for these channels.
If you want customers to consent to receive SMS messages before your agents actually reply, you can do this with a simple Rule in Gorgias. Here’s what it would look like:

Read this article for four more Gorgias Rules to help automate SMS.

This is especially great for anyone who gets tickets assigned to them, but may not be looking at Gorgias throughout their entire workday. (Think managers, social media collaborators, etc.)
To see these notifications, you may need to adjust your browser and/or computer settings. You can see an example for Chrome + Mac in our official Product Update.
Quick response flows bring a critical component to self-service, creating more ways to engage with shoppers who visit your store online. We designed quick response flows around the finding that 60% of the time, customers use chat to ask pre-purchase questions. The most successful merchants leverage their FAQ content to prompt conversations with quick response flows that generate revenue, trust, and loyalty.
If you haven’t yet activated quick response flows, you’re in for a treat. With this revamp, you can easily adjust every step of the quick response flow experience from your self-service settings. Under the Quick Response Flows tab, write in any question and answer you prefer and hit save. There is no other screen you need to navigate to. Using the preview on the right, you can confirm the quality of the experience you want to create for your customers.

If customers click on a quick response flow and find the information they need, this will not count towards your monthly ticket volume.
If they click on a quick response flow and select the “No, I need more help” option, it will create a ticket for an agent to address.
It’s amazing when our merchants start using a feature and take it to the next level. Some of the best practices we’ve seen include creating a unique tag for each quick response flow (e.g. Quick_Response_Flow_1), then adding a corresponding View in Tickets. This way, you can closely track the conversations prompted by quick response flows and dedicate a select group of agents trained to expand on the subject and help your customers become fans. For more on this topic, check out the Quick Response Flows help doc.
Tune into that timestamp if you want the full 25 minutes of customer-led questions and answers from our product team. Here are a few of the highlights!
Gorgias phone is an easy way to add a basic phone line to your store. If you’re looking for advanced, full call center features, our partners like Aircall or RingCentral may be a better solution for you.
For example, their phone-specific statistics are more in-depth than ours, but creating a phone number and answering calls directly in the Gorgias helpdesk is naturally easier with our built-in solution.
Our long-term vision for Gorgias Phone is not to fully compete with apps like Aircall, but rather to invest in ecommerce-specific solutions so you can provide the best voice support to your shoppers.
It’s our next new channel, coming Q3! We have access to the API and are ready to start building at the end of the quarter. (Just need to polish up a few existing channel bugs first.)
Not yet, but we’d love to hear more feedback about this if it’s something you’re interested in! Submit this idea on our Product Roadmap to help us prioritize it.
That completes our recap of our May customer product event. We hold these events once a month as a way to review the latest releases and connect with our customers in real time. It’s a favorite for both customers and the Gorgias team.
If you’d like to sign up for the next one to attend live, you can register here. We’d love to have you join us!

Wondering if your team should add voice support to your ecommerce channels this year? You’re not alone.
Over 15% of our customers currently have a phone integration added to their account, thanks to the Gorgias Voice integration and partners like Aircall and RingCentral.
While voice support may feel like an “outdated” channel in the age of live chat and social media, this tells us that ecommerce support teams are increasingly finding value in offering it to their clients.
Here are 4 benefits of adding voice support to your ecommerce store:
Phones are an immediate communication channel, so it’s not surprising that adding voice support can boost your first response time. What we weren’t expecting, however, was by how much:
Our customers with phones have a first response time that’s 7x faster than merchants that don’t offer voice support. (30 minutes compared to 4 hours.)
What’s even more important to note, however, is that adding voice support doesn’t decrease resolution time (like many support managers fear). In fact, it makes quite a positive impact:
Our merchants using phones have an average resolution time that’s 34% faster than customers who don’t.
So not only does this channel help you respond to customers faster, but it helps you resolve their issues faster. That means your team can work more efficiently and spend roughly a third less time resolving each ticket. (Imagine how that could help increase your store’s revenue!)
Talking (literally) to shoppers and hearing their tone of voice is the best way your agents can adjust their responses to create a great customer experience.
While you can do your best to read clues in email and chat, it’s always going to be easier to match the customer’s tone when actually listening to them on the phone.
And when your agents can express empathy and solve the problem accordingly, you’ve got a better chance at getting that 5-star review and positive customer feedback.
Our customers using phones have an average Satisfaction score of 4.56 out of 5.
While that score also depends a lot on your support agents and their personal approach to customer service, there’s no denying that actually speaking to clients is helpful for both parties in those moments.
Especially if you sell high-end products or have VIP customers (like wholesalers buying in bulk), having a phone number adds a level of legitimacy to your business.
Since most online stores don’t immediately add phones as a support channel, it will stand out to customers when your shop does offer voice support.
Phones add a sense of maturity to your business, and (especially if you’re using an integrated solution like Gorgias Voice) there’s not much cost involved in elevating the status of your store like this.
While the internet has come a long way over the years in terms of accessibility, the truth remains that phone support may be an easier and more comfortable contact method for some of your customers than digital channels.
Test your live chat experience with a screen reader, for example. What’s the experience like? (And how does it compare to dialing a phone number and talking verbally to someone?)
If there’s a chance that voice support is more approachable for a part of your customer demographic, you’ll create a better shopping experience for them by adding a phone line.
The first thing you’ll need to decide is who on your team will actually be answering the phones.
A few options to explore:
Next, you’ll need to choose a phone platform.
If you’re adding our built-in voice channel to your Gorgias helpdesk, all you have to do to get started is log into your Gorgias helpdesk and create a new number (or forward or port an existing one, if you happen to have one already).

Our phone integration is included in all Gorgias plans, and unlike other providers, there’s no annual contract fee and no minimum seat requirement.
This makes it a great option for teams looking to add phones for the first time or who want to manage all communication channels in one place.
Plus, our ecommerce integrations save your agents time by displaying callers’ shopping history right in the helpdesk, so they don’t have to go searching for the last order, for example.

For more tips on how to create efficient phone processes and increase resolution time by 34%, check out this article.
Finally, once you’ve set up your team and chosen your provider, all that’s left to do is make your number visible.
If you’re offering voice support for all your customers, you might place it in the footer of your website or all transactional emails.
If you’re piloting voice support or using it exclusively for a segment of shoppers, you might save it for smaller email segments or place it only on dedicated landing pages just for them.
Wherever you decide to put your number, just make sure it's easily accessible and clearly visible so your shoppers can start calling, and your support team can start delivering even better customer experiences!

SMS is a convenient way for customers to contact your brand and receive fast support. It’s no wonder it’s one of the top five channels that consumers expect to engage with brands, alongside email, voice, website, and in-person.
Every Gorgias plan now includes two-way SMS at no additional cost, making it easy for your brand to start offering this conversational channel.
There are many reasons to offer customer service messaging, but here are the top four:
SMS is a conversational, real-time channel. The benefit of this is that customers tend to keep the conversation short and reply quickly to follow-up questions, meaning your agents can resolve the situation quickly, too.
Most people keep their phone with them everywhere they go. With SMS, it’s easy for customers to start the conversation and follow up as they move throughout their day, instead of feeling tied to a chat conversation on their laptop.
Texting your brand feels like texting a friend, even though the conversation is between a customer and your business. Younger clientele will find this support channel natural, and it can even help you build that friendly feeling into your brand perception.
Does your refund or return policy require photo evidence to kick off the process? If your customers ever need to send pictures of damaged items or wrong products, SMS is the perfect channel because they’re probably taking those photos on their phone anyway.
Still not sure if SMS is a support channel your brand should prioritize? Try it for 2 weeks. Because SMS is included in every Gorgias plan, it’s easy to turn off if you decide it isn’t right.
Recommended reading: Our list of 60+ fascinating customer service statistics.
You’ll need two things to get started with Gorgias SMS. (Don’t worry, they’re both quick!)
If you’re new here, get started on the Gorgias helpdesk. It only takes a few minutes to create an account, and you can always book a call with our sales team if you have questions.

The second is a Gorgias-owned phone number, meaning you either created it in Gorgias or ported it from your previous phone provider. You can do both of these actions in Settings > Phone Numbers.
Note: SMS is currently only available for US, UK, and Canadian numbers.
Once your phone number is ready in Gorgias, you can add the SMS integration to it. You can do this from Settings > Integrations > SMS.
Once the integration is active, you’re ready to start replying to SMS conversations from your customers.

To tell your customers they can now text your brand, we recommend adding “Text us,” plus your phone number, in some or all of these places:
Below are four top automation rules to take full advantage of SMS customer service. We also have a full guide on customer service messaging that includes templates and macros to upgrade your SMS support.
SMS is an official channel in Gorgias, meaning you can see SMS-specific stats or create SMS-specific Views out of the box. There may be times, however, when you also want to tag tickets with “SMS,” in which case you can do so with a Rule like this:

SMS is a fast, conversational channel, so you’ll want to assign these tickets to agents that can keep up with the pace. If you have a dedicated chat team, they’ll be naturals at answering questions via SMS, as well. Here’s a Rule that will automatically assign SMS tickets to a specific team.

When customers text your brand, they’ll expect a fast response. To buy your agents some time, we recommend sending an auto-response to let the customer know their message has been received and an agent will be with them shortly. This also gives them confidence that the text message did in fact go through, so they don’t follow up right away.

Whenever you add a new communication channel for your customers, you should consider how you’ll respond to WISMO (“Where is my order?”) questions on it. With SMS, you’ll want to keep the length of your reply in mind so you’re not sending an insanely long text message back to customers. We recommend creating a Rule that can A) make sure the reply follows the best format for SMS and B) save your agents from having to answer these WISMO questions manually.

Gorgias SMS empowers your brand to keep the conversation going on SMS, even when your customers are on the go.
We also integrate with SMS marketing apps, making it easier for agents to answer promotion replies from one workspace. They can work more efficiently while turning SMS questions into opportunities for better customer value.
In the Gorgias App Store, you’ll find some of the top ecommerce integration partners like Klaviyo, Attentive, Postscript, and more.
If your brand is using any of these apps to drive sales via SMS, we highly recommend integrating with Gorgias so your team can work more efficiently toward your revenue goals. When SMS marketing and SMS customer service work in tandem, they are far more powerful.
Want to see an example of a brand that successfully launched SMS customer support and effectively drove customers to use the new channel? Check out our playbook featuring Berkey Filters, an ecommerce merchant that did just that.
Ready to get started with this conversational support channel? Add SMS to your Gorgias helpdesk today or book a call with our team to learn more.

As we all locked down in March 2020 and changed our shopping habits, many brick-and-mortar retailers started their first online storefronts.
Gorgias has benefitted from the resulting ecommerce growth over the past two years, and we have grown the team to accommodate these trends. From 30 employees at the start of 2020, we are now more than 200 on our journey to delivering better customer service.
Our engineering team contributed to much of this hiring, which created some challenges and growing pains. What worked at the beginning with our team of three did not hold up when the team grew to 20 people. And the systems that scaled the team to 20 needed updates to support a team of 50. To continue to grow, we needed to build something more sustainable.
Continuous deployment — and the changes required to support it — presented a major opportunity for reaching toward the scale we aspired to. In this article I’ll explore how we automated and streamlined our process to make our developers’ lives easier and empower faster iteration.
Throughout the last two years of accelerated growth, we’ve identified a few things that we could do to better support our team expansion.
Before optimizing the feature release process, here’s how things went for our earlier, smaller team when deploying new additions:
This wasn’t perfect, but it was an effective solution for a small team. However, the accelerated growth in the engineering team led to a sharp increase in the number of projects and also collaborators on each project. We began to notice several points of friction:
It was clear that things needed to change.
On the Site Reliability Engineering (SRE) team, we are fans of the GitOps approach, where Git is the single source of truth. So when the previously mentioned points of friction became more critical, we felt that all the tooling involved in GitOps practices could help us find practical solutions.
Additionally, these solutions would often rely on tooling we already had in place (like Kubernetes or Helm, for example).
GitOps is an operational framework. It takes application-development best practices and applies them to infrastructure automation.
The main takeaway is that in a GitOps setting, everything from code to infrastructure configuration is versioned in Git. It is then possible to create automation by leveraging the workflows associated with Git.
One such class of automation could be called “operations by pull requests,” where pull requests and their associated events trigger various operations. For example, opening a pull request could spin up a preview environment, and merging one could trigger a deployment.
ArgoCD is a continuous deployment tool that relies on GitOps practices. It helps synchronize live environments and services to version-controlled declarative service definitions and configurations, which ArgoCD calls Applications.
In simpler terms, an Application resource tells ArgoCD to look at a Git repository and to make sure the deployed service’s configuration matches the one stored in Git.
The goal wasn’t to reinvent the wheel when implementing continuous deployment. We instead wanted to approach it in a progressive manner. This would help build developer buy-in, lay the groundwork for a smoother transition, and reduce the risk of breaking deploys. ArgoCD was an excellent step toward those goals, given how flexible it is with customizable Config Management Plugins (CMP).
ArgoCD can track a branch to keep everything up to date with the last commit, but can also make sure a particular revision is used. We decided to use the latter approach as an intermediate step, because we weren’t quite ready to deploy off the HEAD of our repositories.
The only difference from a pipeline perspective is that it now updates the tracked revision in ArgoCD instead of running our complex deployment scripts. ArgoCD has a Command Line Interface (CLI) that allows us to simply do that. Our deployment jobs only need to run the following command:
The developers’ workflow is left untouched at this point. Now comes the fun part.
Our biggest requirement for continuous deployment was to have some sort of safeguard in case things went wrong. No matter how much we trust our tests, it is always possible that a bug makes its way to our production environments.
Before implementing Argo Rollouts, we still kept an eye on the system to make sure everything was fine during deployment and took quick action when issues were discovered. But up to that point, this process was carried out manually.
It was time to automate that process, toward the goal of raising our team’s confidence levels when deploying new changes. By providing a safety net, of sorts, we could be sure that things would go according to plan without manually checking it all.
Argo Rollouts is a progressive delivery controller. It relies on a Kubernetes controller and a set of custom resource definitions (CRDs) to provide advanced deployment capabilities on top of the ones natively offered by Kubernetes, including blue-green and canary deployment strategies, automated analysis backed by metric providers, and automated rollbacks.
We were especially interested in the canary and canary analysis features. By shifting only a small portion of traffic to the new version of an application, we can limit the blast radius in case anything is wrong. Performing an analysis allows us to automatically, and periodically, check that our service’s new version is behaving as expected before promoting this canary.
Argo Rollouts is compatible with multiple metric providers including Datadog, which is the tool we use. This allows us to run a Datadog query (or multiple) every few minutes and compare the results with a threshold value we specify.
We can then configure Argo Rollouts to automatically take action, should the threshold(s) be exceeded too often during the analysis. In those cases, Argo Rollouts scales down the canary and scales the previous stable version of our software back to its initial number of replicas.

Each service has its own metrics to monitor, but for starters we added an error rate check for all of our services.
Remember when I mentioned replacing complex, project-specific deployment scripts with a single, simple command? That’s not entirely accurate, and requires some additional nuance for a full understanding.
Not only did we need to deploy software on different kinds of environments (staging and production), but also in multiple Kubernetes clusters per environment. For example, the applications composing the Gorgias core platform are deployed across multiple cloud regions all around the world.
While ArgoCD and Argo Rollouts might seem like magic tools, we still need some “glue” to make everything stick together. Thanks to ArgoCD’s application-based mechanisms, however, we were able to replace our custom per-project scripts with a single common tool shared across all projects. We named this in-house tool the deployment conductor.
We even went a step further and implemented this tool in a way that accepts simple YAML configuration files. Such files allow us to declare various environments and clusters in which we want each individual project to be deployed.
When deploying a service to an environment, our tool will then go through all clusters listed for that environment.
For each of these, it will look for dedicated values.yaml files in the service’s chart’s directory. This allows developers to change a service’s configuration based on the environment and cluster in which it’s deployed. Typically, they would want to edit the number of replicas for each service depending on the geographical region.
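The per-environment, per-cluster lookup described above can be sketched roughly as follows. This is an illustrative sketch only: the deployment conductor’s actual file layout isn’t shown in this article, so the `values.<environment>.<cluster>.yaml` naming pattern and the function names here are assumptions.

```python
from pathlib import Path


def clusters_for(environment: str, config: dict) -> list[str]:
    """Read the clusters declared for an environment from the
    conductor's (already parsed) YAML configuration."""
    return config["environments"][environment]["clusters"]


def values_files(chart_dir: str, environment: str, cluster: str) -> list[Path]:
    """Collect the values.yaml overrides that apply to one
    environment/cluster pair, from least to most specific."""
    candidates = [
        Path(chart_dir) / "values.yaml",
        Path(chart_dir) / f"values.{environment}.yaml",
        Path(chart_dir) / f"values.{environment}.{cluster}.yaml",
    ]
    return [path for path in candidates if path.exists()]


def deploy(environment: str, config: dict, chart_dir: str) -> None:
    """Deploy a service to every cluster listed for an environment,
    layering the matching values files for each cluster."""
    for cluster in clusters_for(environment, config):
        files = values_files(chart_dir, environment, cluster)
        # In the real tool, these layered values files would be handed
        # to Helm/ArgoCD; here we just report what would be used.
        print(f"{cluster}: {[f.name for f in files]}")
```

With a layout like this, overriding the replica count for one geographical region is just a matter of dropping a small cluster-specific values file next to the chart.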
This makes it much easier for developers than having to manage configuration and maintain deployment scripts.
This leads us to the end of our journey’s first leg: our first encounter with continuous deployment.
After we migrated all our Kubernetes Deployments to Argo Rollouts, we let our developers get acclimated for the next few weeks.
Our new setup still wasn’t fully optimized, but we felt like it was a big improvement compared to the previous one. And while we could think of many improvements to make things even more reliable before enabling continuous deployment, we decided to get feedback from the team during this period, to iterate more effectively.
Some projects introduced additional technicalities to overcome, but we easily identified a small first batch of projects where we could enable CD. Before deployment, we asked the development team if we were missing anything they needed to be comfortable with automatic deployment of their code in production environments.
With everyone feeling good about where we were at, we removed the manual step in our CI system (GitLab) for jobs deploying to production environments.
We’re still monitoring this closely, but so far we haven’t had any issues. We still plan on enabling continuous deployment on all our projects in the near future, but it will be a work in progress for now.
Here are some ideas for future improvements that anticipate potential roadblocks:
We’re excited to explore these challenges. And, overall, our developers have welcomed these changes with open arms. It helps that our systems have been successful at stopping bad deployments from creating big incidents so far.
While we haven’t reached the end of our journey yet, we are confident that we are on the right path, moving at the right pace for our team.

As you work with SQLAlchemy, over time, you might have a performance nightmare brewing in the background that you aren’t even aware of.
In this lesser-known issue, which strikes primarily in larger projects, normal usage leads to an ever-growing number of idle-in-transaction database connections. These open connections can kill the overall performance of the application.
While you can fix this issue down the line, when it begins to take a toll on your performance, it takes much less work to mitigate the problem from the start.
At Gorgias, we learned this lesson the hard way. After testing different approaches, we solved the problem by extending the high-level SQLAlchemy classes (namely sessions and transactions) with functionality that allows working with "live" DB (database) objects for limited periods of time, expunging them after they are no longer needed.
This analysis covers everything you need to know to close those unnecessary open DB connections and keep your application humming along.
Leading Python web frameworks such as Django come with an integrated ORM (object-relational mapping) that handles all database access, separating most of the low-level database concerns from the actual user code. The developer can write their code focusing on the actual logic around models, rather than thinking of the DB engine, transaction management or isolation level.
While this scenario seems enticing, big frameworks like Django may not always be suitable for our projects. What happens if we want to build our own starting from a microframework (instead of a full-stack framework) and augment it only with the components that we need?
In Python, the extra packages we would use to build ourselves a full-fledged framework are fairly standard: They will most likely include Jinja2 for template rendering, Marshmallow for dealing with schemas and SQLAlchemy as ORM.
Not all projects are web applications (following a request-response pattern) and among web applications, most of them deal with background tasks that have nothing to do with requests or responses.
This is important to understand because in request-response paradigms, we usually open a DB transaction upon receiving a request and we close it when responding to it. This allows us to associate the number of concurrent DB transactions with the number of parallel HTTP requests handled. A transaction stays open for as long as a request is being processed, and that must happen relatively quickly — users don't appreciate long loading times.
Transactions opened and closed by background tasks are a totally different story: There's no clear and simple rule on how DB transactions are managed at a code level, there's no easy way to tell how long tasks (should) last, and there usually isn't any upper limit to the execution time.
This could lead to potentially long transaction times, during which the process effectively holds a DB connection open without actually using it for the majority of the time period. This state is known as an idle-in-transaction connection state and should be avoided as much as possible, because it blocks DB resources without actively using them.
To fully understand how database access works in a SQLAlchemy-based app, one needs to understand the layers responsible for the execution.

At the highest level, we code our DB interaction using high-level SQLAlchemy queries on our defined models. Each query is then transformed by SQLAlchemy's ORM into one or more SQL statements, which are passed on to a database engine (driver) through a common Python DB API defined by PEP-249. (PEP-249 is a Python Enhancement Proposal dedicated to standardizing Python DB server access.) The database engine communicates with the actual database server.
At first glance, everything looks good in this stack. However, there's one tiny problem: The DB API (defined by PEP-249) does not provide an explicit way of managing transactions. In fact, it mandates the use of a default transaction regardless of the operations you're executing, so even the simplest SELECT will open a transaction if none is open on the current connection.
SQLAlchemy builds on top of PEP-249, doing its best to stay out of driver implementation details. That way, any Python DB driver claiming PEP-249 compatibility could work well with it.
While this is generally a good idea, SQLAlchemy has no choice but to inherit the limitations and design choices made at the PEP-249 level. More precisely (and importantly), it will automatically open a transaction for you upon the very first query, regardless of whether one is needed. And that's the root of the issue we set out to solve: In production, you'll probably end up with a lot of unwanted transactions locking up DB resources for longer than desired.
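This autobegin behavior is easy to observe directly. The sketch below uses an in-memory SQLite database for illustration, but the same holds for Postgres and other backends:

```python
from sqlalchemy import create_engine, text
from sqlalchemy.orm import Session

engine = create_engine("sqlite://")  # in-memory DB, for illustration only
session = Session(engine)

assert not session.in_transaction()

session.execute(text("SELECT 1"))  # a harmless, read-only query...

# ...yet SQLAlchemy has already begun a transaction on our behalf,
# and it will stay open until we commit, rollback, or close.
assert session.in_transaction()

session.rollback()  # explicitly release it
assert not session.in_transaction()
```

Nothing in the calling code asked for a transaction; it simply comes with the first statement executed on the session.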
Also, SQLAlchemy uses sessions (in-memory caches of models) that rely on transactions, and the whole SQLAlchemy world is built around sessions. While you could technically ditch them to avoid the idle-in-transaction problem by using a “lower-level” interface to the DB, all of the examples and documentation you’ll find online use the “higher-level” interface (i.e. sessions). You’ll likely feel like you’re swimming against the tide trying to get that workaround up and running.
Some DB servers, most notably Postgres, default to an autocommit mode. This mode implies atomicity at the SQL statement level, which is the behavior developers are likely to expect: they prefer to explicitly open a transaction block when needed and to operate outside of one by default.
If you're reading this, you have probably already Googled for "sqlalchemy autocommit" and may have found their official documentation on the (now deprecated) autocommit mode. Unfortunately this functionality is a "soft" autocommit and is implemented purely in SQLAlchemy, on top of the PEP-249 driver; it doesn't have anything to do with DB's native autocommit mode.
This version works by simply committing the opened transaction as soon as SQLAlchemy detects an SQL statement that modifies data. Unfortunately, that doesn't fix our problem; the pointless, underlying DB transaction opened by non-modifying queries still remains open.
When using Postgres, we could in theory play with the new AUTOCOMMIT isolation level option introduced in psycopg2 to make use of the DB-level autocommit mode. However this is far from ideal as it would require hooking into SQLAlchemy's transaction management and adjusting the isolation level each time as needed. Additionally, "autocommit" isn't really an isolation level and it’s not desirable to change the connection's isolation level all the time, from various parts of the code. You can find more details on this matter, along with a possible implementation of this idea in Carl Meyer's article “PostgreSQL Transactions and SQLAlchemy.”
At Gorgias, we always prefer explicit solutions to implicit assumptions. By including all details, even common ones that most developers would assume by default, we can be more clear and leave less guesswork later on. This is why we didn't want to hack together a solution behind the scenes, just to get rid of our idle-in-transactions problem. We decided to dig deeper and come up with a proper, explicit, and (almost) hack-free method to fix it.
The following chart shows the profile of an idle-in-transaction case over a period of two weeks, before and after fixing the problem.

As you can see, we’re talking about tens of seconds during which connections are being held in an unusable state. In the context of a user waiting for a page to load, that is an excruciatingly long period of time.
SQLAlchemy works with sessions that are, simply put, in-memory caches of model instances. The code behind these sessions is quite complex, but usage boils down to either an explicit session reference or implicit usage through the model.
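For illustration, here is a minimal sketch of both styles. The model and engine are stand-ins, and the implicit style uses scoped_session's query_property, the pattern popularized by Flask-SQLAlchemy:

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, sessionmaker, scoped_session

engine = create_engine("sqlite://")  # in-memory DB for the sketch
Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)


Base.metadata.create_all(engine)

# Explicit session reference: the session object is created and
# queried directly.
SessionLocal = sessionmaker(bind=engine)
session = SessionLocal()
session.add(User(name="alice"))
session.commit()

user = session.query(User).filter_by(name="alice").one()

# Implicit usage: a thread-local scoped_session exposed as a
# query property on the model itself.
ScopedSession = scoped_session(sessionmaker(bind=engine))
Base.query = ScopedSession.query_property()

same_user = User.query.filter_by(name="alice").one()
```

Both styles end up in the same place: a session that has quietly begun a transaction for the query.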
Both of these approaches will ensure a transaction is opened, and it will not be closed until a later session.commit() or session.rollback(). There's actually nothing wrong with calling session.commit() when you need to explicitly close a transaction that you know is open and you're done using the DB in that particular scope.
To address the idle-in-transaction problem generated by such a line, we must keep the code between the query and the commit relatively short and fast (i.e. avoid blocking calls or CPU-intensive operations).
It sounds simple enough, but what happens if we access an attribute of a DB model after session.commit()? It will open another transaction and leave it hanging, even though it might not need to hit the DB at all.
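This behavior is easy to reproduce. With the default expire_on_commit=True, the commit expires all loaded attributes, so the next attribute access triggers a refresh query and silently begins a new transaction (illustrative sketch, in-memory SQLite):

```python
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

engine = create_engine("sqlite://")
Base = declarative_base()


class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)


Base.metadata.create_all(engine)
session = Session(engine)

user = User(name="alice")
session.add(user)
session.commit()
assert not session.in_transaction()  # transaction properly closed

# The commit expired the instance's attributes, so this innocent
# read issues a refresh SELECT and quietly opens a brand-new
# transaction that nobody will ever commit.
_ = user.name
assert session.in_transaction()
```

That dangling transaction is exactly the idle-in-transaction state described above.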
While we can't foresee what a developer will do with the DB object afterward, we can prevent usage that would hit the DB (and open a new transaction) by expunging it from the session. An expunged object will raise an exception if any unloaded (or expired) attributes are accessed. And that’s what we actually want here: to make it crash if misused, rather than leaving idle-in-transaction connections behind to block DB resources.
When working with multiple objects and complex queries, it’s easy to overlook the necessary expunging of those objects. It only takes one un-expunged object to trigger the idle-in-transaction problem, so you need to be consistent.
Objects can't be used for any kind of DB interaction after being expunged. So how do we make it clear and obvious that certain objects are to be used in within a limited scope? The answer is a Python context manager to handle SQLAlchemy transactions and connections. Not only does it allow us to visually limit object usage to a block, but it will also ensure everything is prepared for us and cleaned up afterwards.
This construct normally opens a transaction block associated with a new SQLAlchemy session, but we’ve added a new expunge keyword to the begin method, instructing SQLAlchemy to automatically expunge objects associated with the block’s session (the tx.session). To get this kind of behavior from a session, we need to override the begin method (and friends) in a subclass of SQLAlchemy’s Session.
We want to keep the default behavior and use a new ExpungingTransaction instead of SQLAlchemy's SessionTransaction, but only when explicitly instructed to by the expunge=True argument.
You can use the class_ argument of sessionmaker to instruct it to build an ExpungingSession instead of a regular Session.
The last piece of the puzzle is the ExpungingTransaction code, which is responsible for two important things: committing the session so the underlying transaction gets closed and expunging objects so that we don't accidentally reopen the transaction.
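Putting the pieces of this section together, a minimal self-contained sketch might look like the following. The names (`ExpungingSession`, `ExpungingTransaction`, the `expunge` keyword) follow the text, but the internals are illustrative: a production version would likely integrate with SQLAlchemy's own `SessionTransaction` machinery rather than replace it with a plain wrapper.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class ExpungingTransaction:
    """Commit on success (closing the underlying transaction), then expunge
    everything so that later attribute access cannot reopen it."""

    def __init__(self, session):
        self.session = session

    def __enter__(self):
        return self

    def __exit__(self, exc_type, exc, tb):
        if exc_type is None:
            self.session.commit()
        else:
            self.session.rollback()
        self.session.expunge_all()
        return False  # never swallow exceptions

class ExpungingSession(Session):
    def begin(self, expunge=False, **kwargs):
        if expunge:
            return ExpungingTransaction(self)
        return super().begin(**kwargs)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionMaker = sessionmaker(engine, class_=ExpungingSession)

with SessionMaker().begin(expunge=True) as tx:
    tx.session.add(User(name="alice"))
# here: the transaction is committed and every object is expunged
```

Note how the `class_=ExpungingSession` argument to sessionmaker wires in the subclass, so every session produced by `SessionMaker()` supports `begin(expunge=True)`.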
By following these steps, you get a useful context manager that forces you to group your DB interaction into a block and notifies you if you mistakenly use (unloaded) objects outside of it.
What if we really need to access DB models outside of an expunging context?
Simply passing models to functions as arguments achieves a worthwhile goal: decoupling model retrieval from model usage. However, such functions are no longer in control of what happens to those models afterwards.
We don't want to forbid all usage of models outside of this context, but we need to somehow inform the user that the model object comes “as is,” with whatever loaded attributes it has. It's disconnected from the DB and shouldn't be modified.
In SQLAlchemy, when we modify a live model object, we expect the change to be pushed to the DB as soon as commit or flush is called on the owning session. With expunged objects this is not the case, because they no longer belong to a session. So how does the user of such an object know what to expect from it?
To safely and explicitly pass along this kind of model object, we introduced frozen objects. Frozen objects are essentially proxies to expunged models that don't allow any modification.
To work with these frozen objects, we added a freeze method to our ExpungingSession:
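A sketch of what the frozen proxy and the freeze method might look like. The names follow the text, but the proxy shown here handles plain attributes only (relationships come later); treat it as illustrative, not the article's exact implementation.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

class FrozenModel:
    """Read-only proxy around an expunged model instance."""

    def __init__(self, instance):
        object.__setattr__(self, "_instance", instance)

    def __getattr__(self, name):
        # reads are delegated to the (detached) model's loaded attributes
        return getattr(object.__getattribute__(self, "_instance"), name)

    def __setattr__(self, name, value):
        raise AttributeError("frozen object: modification is not allowed")

class ExpungingSession(Session):
    def freeze(self, instance):
        self.expunge(instance)  # detach, so no lazy DB access is possible
        return FrozenModel(instance)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionMaker = sessionmaker(engine, class_=ExpungingSession)

session = SessionMaker()
session.add(User(name="alice"))
session.commit()
frozen = session.freeze(session.query(User).first())
session.close()

print(frozen.name)  # reading already-loaded attributes is fine
```

Reads of loaded attributes pass through; any assignment raises an AttributeError, making accidental "modifications" to a detached model fail loudly.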
With freeze in place, code that retrieves a model can freeze it before the transaction block ends and safely hand the frozen proxy to the rest of the program.
Now, what if we want to modify the object outside of this context, later on (e.g., after a long-lasting HTTP request)? As our frozen object is completely disconnected from any session (and from the DB), we need to fetch a warm instance associated with it from the DB and make our changes to that instance. This is done by adding a helper fetch_warm_instance method to our session.
The code that modifies the object then follows the same context-manager pattern.
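A self-contained sketch of that flow. For brevity, `fetch_warm_instance` is written here as a free function (in the article it lives on the session), and the "freeze" step is simplified to a plain expunge.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionMaker = sessionmaker(engine)

def fetch_warm_instance(session, detached):
    """What a Session.fetch_warm_instance helper might do: load a live
    instance matching the detached object's primary key."""
    return session.get(type(detached), detached.id)

with SessionMaker() as session:
    session.add(User(name="alice"))
    session.commit()

# first block: fetch and detach (in the article: freeze) the model
with SessionMaker() as session:
    my_model = session.get(User, 1)
    session.expunge(my_model)  # stand-in for freezing

# ... a long-lasting HTTP request happens here; no transaction is held open ...

# second block: fetch a warm instance and modify it
with SessionMaker() as session:
    warm = fetch_warm_instance(session, my_model)
    warm.name = "bob"
    session.commit()  # the change is pushed to the DB right here
```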
When the second context manager exits, it calls commit on the session, and our changes are committed to the DB right away.
We now have a way of safely dealing with models without generating idle-in-transaction problems, but the code quickly becomes a mess when relationships are involved: we need to freeze related objects separately and pass them along as if they weren't related. This can be overcome by telling the freeze method to freeze all related objects, recursively walking the relationships.
We'll have to make some adjustments to our frozen proxy class as well.
Now, we can fetch, freeze, and use frozen objects with any preloaded relationships.
While the code to access the DB with SQLAlchemy may look simple and straightforward, one should always pay close attention to transaction management and the subtleties that arise from the various layers of the persistence stack.
We learned this the hard way, when our services eventually started to exhaust the DB resources many years into development.
If you recently decided to use a software stack similar to ours, you should consider writing your DB access code in such a way that it avoids idle-in-transaction issues, even from the first days of your project. The problem may not be obvious at the beginning, but it becomes painfully apparent as you scale.
If your project is mature and has been in development for years, you should consider planning changes to your code to avoid or to minimize idle-in-transaction issues, while the situation is still under control. You can start writing new idle-in-transaction-proof code while planning to gradually update existing code, according to the capacity of your development team.

Like any major topic in your company, your compensation policy should reflect your organizational values.
At Gorgias, we created a compensation calculator that reflected ours, setting salaries across the organization based on 3 key principles.
Since the beginning, we applied the first two: Each of our employees was granted data-driven stock options that beat the market average.
However, we were challenged internally: Our team members asked how much they would make if they switched teams or if they got promoted.
This led to the implementation of our third key principle, as we shared the compensation calculator with everyone at Gorgias and beyond: See the calculator here.
This was not a small challenge. We’re sharing our process in hopes that we can help other companies arrive at equitable, transparent compensation practices.
First, let’s get back to how we built the tool. We had to decide which criteria we wanted to take into account. Based on research articles and benchmarks on what other companies did before, we decided that our compensation model would be based on 4 factors: position, level, location, and strategic orientation.
To sum it all up briefly, our formula looks like this:
Average of benchmark data (for the position, at the defined percentile and level) × Location index
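Expressed as a tiny illustrative function, with made-up benchmark numbers (only the Paris location index of 0.56 comes from the article):

```python
def target_salary(benchmark_data_points, location_index):
    """Average the benchmark data for a (position, level) pair at the chosen
    percentile, then scale by the location index."""
    return sum(benchmark_data_points) / len(benchmark_data_points) * location_index

# e.g. three database benchmarks for a role paid at the 90th percentile,
# for a hire in Paris (location index 0.56)
print(target_salary([150_000, 160_000, 155_000], 0.56))
```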

This is the job title someone has in the company. It looks simple, but it can be challenging to define! Even if the titles don’t really vary from one company to another, people might have different duties, deal with much bigger clients or have more technical responsibilities. Sometimes your job title or position doesn’t match the existing databases.
For some of these roles, when we thought our team members were doing more than the market average, we cross-referenced several databases to get closer to fairness.
To assess a level we defined specific criteria in our growth plan for each job position. It is, of course, linked to seniority, but that is not the primary factor. When we hire someone, we evaluate their skills using specific challenges and case studies during our interview processes.
Depending on the database, you'll find levels like beginner, intermediate, and expert, which we represent as L1, L2, L3, and so on. We decided to go with six levels from L1 to L6 for individual contributors and six levels in management, from team lead to C-level executive.
Our location index is based on the cost of living in a specific city (we rely on Numbeo, for instance) and on the average salary for the positions we hire (we use Glassdoor). Some cities are stronger talent pools for specific roles. By combining the two, we get a more accurate location index.
When we are missing data for a specific city, we use the nearest one where we have data available.
Our reference is San Francisco, where the location index equals 1, meaning it’s basically the most expensive city in terms of hiring. For others, we have an index that can vary from 0.29 (Belgrade, Serbia) to 0.56 (Paris, France) to 0.65 (Toronto, Canada) etc. We now have 50+ locations in our salary calculator — a necessary consideration for our quickly growing, global team of full-time employees and contractors.

We rely on our strategic orientation to select which percentile we want to use in our databases. When we started Gorgias we were using the 50th percentile. As we grew (and raised funds), we wanted to be 100% sure that we were hiring the best people to build the best possible company.
High quality talent can be expensive (but not as expensive as making the wrong hires)! Obviously, we can’t pay everyone at the top of the market and align with big players like Google, but we can do our best to get close.
Since having the best product is our priority we pay our engineering and product team at the 90th percentile, meaning their pay is in the top 10% of the industry. We pay other teams at the 60th percentile.
Some other companies take additional criteria into account, such as company seniority. We believe seniority should be reflected in equity rather than in salary. If you apply a company-seniority index to salaries, some of your team members will eventually fall out of line with the market. Those employees may stay in your company only because they can't find the same salary elsewhere.
Data is at the heart of our company DNA.
Where should you find your data? Data is everywhere! What matters most is the quality.
We look for the most relevant data on the market. If the database is not robust enough, we look elsewhere. So far we have managed to rely on several of them: Opencomp, Optionimpact, Figures.hr, and Pave are some major datasets we use for compensation. We’re curious and always looking for more. We’ll soon dig into Carta, Eon, and Levels. The more data we get, the more confident we are about the offers we make to our team.

Once we have the data, we apply our location index. It applies to both salaries and equity.
To build our equity package, we start from the compensation data and then apply a “team” multiplier and a “level” multiplier. Those multipliers rely on data, of course: we use the same databases mentioned above, as well as the Rewarding Talent documentation for Europe.
As we mentioned above, once our tool was robust enough, we shared it internally.
To be honest, checking and checking again took longer than expected. But we all agreed that we'd rather release it to good reactions than rush it and create fear. We postponed the release for one month to check and double-check the results.
For the most effective release, we decided to do two things:
Overall, the reactions have been great. People loved the transparency and we got solid feedback.
We released the new calculator in September 2021, and overall we’re really happy with the response. We also had positive feedback from the update this month.
Let’s see how it goes with time.
Let’s be humble here: It’s only the beginning. It’s a Google Sheet. Of course, we’ll need to iterate on it.
In the meantime, you can check out the calculator here.
So far we’ve made plans to review the whole grid every year. However, now that it’s public within the teams, we can collect feedback and potentially make some changes. Everyone can add comments as they notice potential issues.
The next step for us is to share it online with everyone, on our website, so that candidates can have a vision of what we offer. We hope we’ll attract more talent thanks to this level of transparency and the value of our compensation packages.

I come from the world of physical retail where building a bond was more straightforward. We often celebrated wins with breakfast and champagne (yes, I’m French!) or by simply clapping our hands and making noise of joy.
We would also have lunch together every day, engaging in many informal discussions.
Of course, it bonded us! I knew the names of my colleagues' dogs and heard about their plumbing problems, and I felt really close to many of them.
Employee engagement is one of the primary drivers of productivity, work quality, and talent retention. When I joined Gorgias, where we have a globally distributed team, I wondered how you create the sense of belonging that drives that engagement.
Like many companies now, our workforce is distributed. But at Gorgias, it’s a truly global affair: Our team lives in 17 countries, on four continents, and across many different time zones, which can be challenging.
And yet, I believe Gorgias culture is truly amazing and even better than the one I used to know.
I realize that we achieved that by relying on the critical ingredients of a strong relationship.

By repeating these strong moments, you can make the connection between people stronger as well. The stronger the connection, the stronger the engagement.
Speaking of a strong engagement, Gorgias’s eNPS (employee Net Promoter Score) is 50. How is this possible? Well, what’s always quoted as one of our main strengths is the company culture, and how it connects our employees.
Let’s take it further by exploring five actionable steps we have taken to make that happen.
While some would push back against events like these falling under the purview of the People team, they are important for building strong culture, team cohesion, and employee happiness — all areas that are definitely part of our directive.
Here’s what you need to know to bring these summits to your organization.

As the name states, it’s a virtual event where the whole company connects.
It’s not mandatory, but it is highly recommended to attend because it’s fun and you learn many things.
It’s a mix of company updates, fun moments, and inspiring sessions. Each session is short, to give everyone the opportunity to breathe.
Typically, we have three kinds of sessions: company updates, inspiring sessions, and fun moments.
Due to timezones, some sessions don’t include every country.


Our last virtual summit cost us roughly $13,000, which works out to about $65 per head.
The first thing you might already have in mind is: It takes time! And you’re right.
The more we grow, the more challenging it becomes to organize these events.
I believe we’ll eventually need to have a dedicated event manager for all of our physical and virtual events. I want to have them within my team, and I 100% believe it’s worth it.
Another challenge can be technical difficulties with your event software choice, so make sure that you find a reliable platform that suits your needs.
Our team is a mix of hybrid and full-remote workers.
Since we don’t want the full-remote people to become disconnected, we highly encourage them to join the nearest hub once a quarter.
And when they do, we organize some happy hours, games or movie nights. Those face-to-face activities help create bonds between employees. It’s simple and doesn’t require a lot of organization, but it creates an incredible moment every time the remote teams join. We call them Gorgias Weeks.
We were fortunate to be able to organize our company offsite and gather a massive part of the crew together in October 2021.
The pandemic created doubt and additional points of stress, but looking back I’m so glad we were able to create an opportunity for everyone to meet in person.
We asked everyone to bring a health pass — full vaccination or PCR test — and we picked a location that allowed for a lot of outdoor activities.
We made sure the agenda for the two days was not too busy. As with our virtual summit, it was a balance of company alignment, learning, and fun. We made sure people had enough free time to relax, talk to each other, play games, or play sports.
This company offsite is surely an essential and strong moment for us and it helps create strong bonds and great memories.

We encourage every team to organize their own offsite for team-building purposes. Since people don’t meet a lot physically, having these once a year is great!
We let each team lead own it: they pick the location and the agenda, and we provide guidelines along with the budget.
Needless to say, it helps build stronger bonds and great memories.
In my experience, it was quite tough to create those moments internally with the team. That’s why we decided to start our team meeting with a fun activity of 10-15 minutes, where we are able to share more than just work.
Every week, there is a different meeting owner who has to come up with new fun activities and games. Starting the meeting with this kind of ice-breaking activity brings powerful energy, and people are more engaged and effective in the sessions. I would recommend it to everyone, especially to those who think, “We already have so many things to review in those weekly meetings, we don’t have time for that.” Try it once, you’ll see how the energy and productivity are different afterward.
On top of that, I also believe tools that encourage colleagues to meet randomly are great. On our side, we use Donut, which sends weekly reminders that encourage employees to make it to a casual meeting with a colleague.
Overall, we’ve organized six virtual summits, four company retreats, three Gorgias weeks, and hundreds of virtual coffee and fun meetings.
At the beginning there were only 30 people in the company — now there are 200 of them. As I mentioned, it’s becoming more and more challenging to organize these meetups, but it’s also the most exciting part: making sure the next summit is better than the previous one!
Of course, I’m aware that employee fulfillment and connection are not the only ingredients for retention. But they are key ingredients and shouldn’t be forgotten, especially as we all become more remote.
It’s a worthy investment to organize these events and allocate resources to them, because it makes everyone at Gorgias feel included and connected. And I have no doubt, now, that it’s part of our responsibilities in People Ops.

When a customer's problem goes unanswered on Twitter, you lose that customer and possibly the audience of people who watched it happen.
It’s hard to come back from that, which is why customer care is so important on social media platforms. In fact, Shopify found 57% of North American consumers are less likely to buy if they can’t reach customer support in the channel of their choice.
Your customers want to talk to you — and you should want the same, before they head to a competitor. But first, you need to build a customer support presence on Twitter that lives up to your broader customer experience.
We've helped over 8,000 brands upgrade their customer support and seen the best and worst of social media interactions. Here are our top 10 battle-tested best practices for providing exceptional Twitter support.
Prompt response time is one of the most important pillars of great customer service, and according to data from a survey conducted by Twitter, 75% of customers on Twitter expect fast responses to their direct messages.
Of course, responding with accurate and helpful information is ultimately even more important than responding in real time, so be sure that you don't end up providing inaccurate information in a rush to reduce your response times.
Promptly and accurately responding to customer service issues that are sent to your company's Twitter account is often easier said than done. To do both, you need an efficient system and a well-trained customer support team.
This is where a helpdesk is critical, to bring your Twitter conversations into a central feed with all your other tickets.

If you’re trying to manage Twitter natively in a browser, or through copy-paste discussions with your social media manager, you’re not going to see the first-response times you need to succeed.
As data from Twitter's survey shows, speed is a necessity in order to meet customer expectations and provide a positive experience.
There may be instances where customers contact your Twitter support account via a mention in a tweet as opposed to a direct message. In fact, according to data from Twitter, one in every four customers will tweet publicly at brands in the hopes of getting a faster response. In these instances, it is important to move the conversation out of the public space and into DMs as soon as possible.
There are a couple of reasons you would want to avoid resolving customer service issues on a public forum. For one, keeping customer service conversations private allows you to maintain better control over your brand voice and image since customer service conversations can often get a little messy and may not be something you want to broadcast to your entire audience.
Moving conversations out of the public space also enables you to collect more personal data from the customer, such as their phone number or other contact information, details about their order, and billing information, without having to worry about privacy concerns.
In Gorgias, you can set up an auto-reply rule that responds to public support questions and directs them to send a DM for further help. This can ensure that people feel heard immediately, even if it takes a while for your team to get to their DM.
Regardless of whether you are discussing an issue with a customer via your Twitter account or any other medium, it is never a good idea for your reps to get into arguments with the customer.
Social media platforms such as Twitter tend to have a much more informal feel than other contact methods, and they also tend to sometimes bring out the worst in the people who hide behind the anonymity that they provide. You may end up finding that customers who contact you via Twitter are sometimes a little more argumentative than customers who contact you via more formal channels.
Nevertheless, it is essential for your Twitter support reps to maintain professionalism and avoid engaging in emotional arguments with customers. It may even help to establish guidelines for your team, to help deal with this type of customer tweet. You can include rules on emoji use, helpful quick-response scripts, and whatever other priorities you have.
Recommended reading: How to respond to angry customers
It is certainly possible to use Twitter alone when providing customer support via the platform. However, this isn't always the most efficient way to go about it.
Keep in mind that, like other social networks, Twitter wasn't necessarily designed to be a customer support channel. There aren't a lot of Twitter features beyond basic notifications that will be able to help your team organize support tickets.
Thankfully, there are third-party solutions that you can use that allow your support agents to respond to tweets and Twitter direct messages from your company website in a way that is much more organized and efficient. At Gorgias, for example, we offer a Twitter integration that will automatically create support tickets anytime someone mentions your brand, replies to your brand's tweets, or direct messages your brand. (By the way, we also offer integrations for Facebook Messenger and WhatsApp.)
Agents can then respond to these messages and mentions directly from the Gorgias platform, where they will show up in the same dashboard as the tickets from your other support channels.
This integration makes Twitter customer support far more efficient for your team and is one of the most effective ways to take your Twitter customer support services to the next level.

It is always important to respond to all questions and feedback that customers provide via Twitter, even if that feedback is negative. This is an important part of relationship marketing.
Many brands shy away from responding to negative feedback on public forums for fear of drawing more attention to the issue. However, this doesn't usually have the desired effect. Failing to respond to negative feedback can make it seem to anyone who happens to see the tweet in question that your brand is dodging the issue.
While you may wish to move the conversation out of the public space as soon as possible, you should always provide a public response to public feedback — negative or not.
For examples of brands effectively responding to negative tweets, check out this article.
According to data from Forbes, 86% of customers say that they would rather speak with a real human being than a chatbot. Even if you don't rely on chatbots for providing customer support, though, your customers may not be able to tell the difference unless you train your reps to be as personable as possible.
When your reps tailor their responses and connect on a personal level, it provides a much more positive support experience that provides a halo effect to your brand. Customers will remember that the next time they arrive at the checkout button, and they might even be open to upsell opportunities at that very moment.
Small businesses may not struggle to keep up with brand mentions, given that there are fewer to track. For larger companies, though, keeping up with brand mentions can be a difficult task. This is especially true when some users tag brands with hashtags instead of handles.
This makes it important to create an effective strategy for tracking brand mentions in an efficient and organized manner. One of the best ways to go about this is to utilize integrations that will create a support ticket anytime a customer mentions your brand in a tweet. You can even create custom views in Gorgias to centralize all of these mentions.
By tracking these brand mentions, you can also retweet positive posts for brand awareness.

Not every customer service issue can be handled via Twitter. If there are certain types of issues that fall into that category for your brand, it's a good idea to keep your customers in the loop by providing concise FAQ guidelines that explain which issues you do and don't support via Twitter.
These guidelines can come in the form of a pinned Tweet at the top of your Twitter support account or an off-Twitter link that you provide to customers when they contact you on Twitter with an issue that requires a different medium for resolution. You could even have a visual you add when you respond to questions that don’t fit your guidelines.
Simply responding to customers and requesting that they direct message you for further assistance is another option for addressing issues that you don't want to handle on Twitter. If you set up the auto-reply we mentioned in the second tip, above, it could even include a link to these guidelines.
Check out what this brand did when contacted on Twitter with a problem that needed to be taken off-platform in order to be resolved.
If it makes sense for your brand, it may be a good idea to create multiple Twitter handles that are designated for sales, marketing, and customer support. Creating multiple Twitter handles that serve different purposes allows you to better organize your direct messages and mentions by breaking them down into different categories.
Having a designated customer support Twitter account can also better encourage customers to contact you via Twitter with their customer support issues since it reassures them that this is the purpose that the account serves.
But even then, some customers will still tweet at your main account with issues. When this happens, you can use intent and sentiment analysis in Gorgias to automatically route those issues to the correct agent or team.

When a customer takes the time to reach out to you on Twitter, whether it’s via direct message or a mention, it’s likely not the first time that customer has interacted with your brand.
If you respond on Twitter, you can see the direct message history on that platform, but that’s where the context ends. With Gorgias’s Twitter integration, you can see the full customer journey, including all social media engagement, support tickets across all of your channels, and even past orders.
This context is crucial to understanding the conversation you’re walking into, so you can deal with the situation appropriately. If the person is a long-time customer who engages frequently, you’re going to treat that conversation differently than that of a customer who bashes you on social networks and returns products frequently.
Any customer support you provide through Twitter will make things more convenient and accessible for your audience.
But to make the experience faster and more pleasant on both sides of the conversation, you should consider handling all of your social media customer support in one platform, alongside all your other tickets.
Gorgias ties social handles to customer profiles from your Shopify, BigCommerce or Magento store, uniting relevant conversations from across all of your support channels. All of that info is automatically pulled into your response scripts, and you can even automate the process for no-touch ticket resolution.
Check out our social media features to learn more.


