AI Is Table Stakes for Ecommerce: What the Data Tells Us About 2026

AI adoption in ecommerce has reached 96% in 2026, with use cases spanning support automation, personalization at scale, product discovery, and end-to-end operations.
By Gabrielle Policella

TL;DR:

  • AI adoption is rapidly accelerating. 96% of ecommerce professionals now use AI in their roles, up from 69% in 2024.
  • AI has moved beyond support automation. Use cases have evolved into revenue generation, personalization, and logistics.
  • Brands are tying AI success to profit-and-loss outcomes. 60% of brands consider AOV a top indicator of AI effectiveness.  

A year ago, ecommerce brands were still debating whether AI was worth the investment. That debate is over. Today, nearly every ecommerce professional uses AI to do their job.

The shift isn't just about adoption. It's about what AI is used for and how brands measure its impact. Support automation was the entry point. Now, AI is embedded across the full operation, from product recommendations to inventory control to real-time shopping conversations.

In our 2026 State of Conversational Commerce Report, we break down trends on AI usage among 400 ecommerce decision-makers and 16,000+ ecommerce brands using Gorgias. 

{{lead-magnet-1}}

AI adoption has reached a tipping point

If we rewind 12 months, the industry was still split on AI. Some ecommerce professionals were excited, but most were still hesitant. In 2024, 69% of ecommerce professionals used AI in their roles. By 2025, that number reached 77%. In 2026, it hit 96%.

Ecommerce professionals using AI: 69.2% in 2024, 77.2% in 2025, and 96% in 2026.

The confidence numbers back it up. 71% of brands say they are confident using AI for ecommerce, and 73% are satisfied with its business impact. 

In early 2025, only 30% of ecommerce professionals rated their excitement for AI at 10/10. Today, zero percent of respondents describe themselves as hesitant about AI. 

Views on AI among ecommerce professionals: 33% say it’s transforming their business, 50% see steady improvements, 18% say it hasn’t delivered, and 0% remain hesitant.

AI use cases now span the full ecommerce stack

Using AI in ecommerce is not new. Recommendation algorithms and expert systems date back to the 1980s. And if you’ve ever leveraged similar-product recommendations or chatbots, you’ve already integrated AI into your ecommerce stack.

Modern AI is far more sophisticated. 

With the rise of agentic commerce and conversational AI, brands began leveraging AI agents to automate the processing of repetitive support tickets. That’s still happening today, but the scope has expanded beyond the support queue. 


Ecommerce brands are deploying AI across every layer of their operation:

  • Customer support automation: 96%
  • Product recommendations: 88%
  • Automated tracking and status updates: 69%
  • Personalization: 64%
  • Inventory control: 51%
  • Dynamic pricing and discounting: 36%
  • Order fulfillment: 18%

When brands were asked which channels contribute most to their AI success, conversational channels dominated. Social media messaging led at 78%, followed by SMS at 70%, and website live chat at 51%. Shoppers want fast, personal conversations, and AI is the best way to deliver that at scale.

Learn more about AI adoption, perception, and use case trends in the full 2026 Conversational Commerce Report.

How AI is changing CX success metrics

For decades, customer support success meant fast response times and high satisfaction scores. Those are still important indicators of success, but leading brands are adding revenue-focused metrics to their dashboards.   

91% of brands still track CSAT as a measure of AI's impact. But 60% now include AOV as a top indicator, and higher-revenue brands earning $20M+ are focusing on metrics like total operating expenses, cost per resolution, incremental revenue, and one-touch ticket rate.

AI impact measured by 91% customer satisfaction, 60% average order value, and 43% resolution time.

AI can now start a conversation, ease customer doubts, sell, upsell, and recover abandoned carts, all in a single interaction. If you’re only measuring CSAT, you’re ignoring the real ROI of your conversational AI investment.
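To make the revenue-focused metrics above concrete, here is a minimal Python sketch of computing AOV, one-touch ticket rate, and cost per resolution from exported records. The record shapes and field names are illustrative assumptions, not a real helpdesk schema:

```python
def aov(orders):
    """Average order value across a set of orders."""
    return sum(o["total"] for o in orders) / len(orders)

def one_touch_rate(tickets):
    """Share of resolved tickets closed in a single reply."""
    resolved = [t for t in tickets if t["resolved"]]
    one_touch = [t for t in resolved if t["replies"] == 1]
    return len(one_touch) / len(resolved)

def cost_per_resolution(total_support_cost, tickets):
    """Total support spend divided by the number of resolved tickets."""
    resolved = sum(1 for t in tickets if t["resolved"])
    return total_support_cost / resolved

# Illustrative data
orders = [{"total": 80.0}, {"total": 120.0}]
tickets = [
    {"resolved": True, "replies": 1},
    {"resolved": True, "replies": 3},
    {"resolved": False, "replies": 2},
]

print(aov(orders))                        # 100.0
print(one_touch_rate(tickets))            # 0.5
print(cost_per_resolution(500, tickets))  # 250.0
```

Tracking these alongside CSAT gives a fuller view of what automated conversations actually return.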

AI makes every conversational channel a storefront

Virtual shopping assistants now proactively engage shoppers, adapt to their needs in real time, and offer contextual product recommendations and upsells. When the moment calls for it, they can close the deal with a targeted discount. 

Gorgias brands using AI Agent's shopping assistant capabilities nearly doubled their purchase rates and converted 20–50% better than those using AI Agent for support only.

Orthofeet, the largest provider of orthopedic footwear in the US, is a concrete example of this in practice. Using Gorgias, they achieved:

  • 56% of support tickets automated in 2 months
  • Email response times down from 24 hours to 35 seconds
  • Double-digit revenue growth without adding headcount

What this means for your AI strategy

The data tells a clear story: AI has evolved beyond a tool for handling tier 1 support tickets. It’s a core part of your revenue generation strategy. 

57% of brands are already using AI for 26–50% of all customer interactions, and 37% expect that share to rise to 51–75% within the next two years. The brands building toward that range now are the ones who will have the operational advantage when it matters most.

The practical question isn't whether to invest in AI. It's where to focus first. Based on where brands are seeing the most impact, three priorities stand out:

  • Start with high-volume, low-complexity tickets. WISMO (where is my order) inquiries, return policy questions, and order status updates are where AI delivers the fastest return. Automate these first.
  • Expand into conversational channels. Social messaging and SMS are where AI is driving the most success right now.
  • Connect AI performance to revenue metrics. If you're only measuring CSAT and response time, you're missing half the story. Add AOV, conversion rate, and incremental revenue to your reporting.

Want to go deeper on the full 2026 conversational commerce trends? Read the complete report for data across every major AI use case in ecommerce.

{{lead-magnet-1}}


AI in CX Webinar Recap: Building a Conversational Commerce Strategy that Converts

By Gabrielle Policella

TL;DR:

  • Implement quickly and optimize continuously. Cornbread's rollout was three phases: audit knowledge base, launch, then refine. Stacy conducts biweekly audits and provides daily AI feedback to ensure responses are accurate and on-brand.
  • Simplify your knowledge base language. Before BFCM, Stacy rephrased all guidance documentation to be concise and straightforward so Shopping Assistant could deliver information quickly without confusion.
  • Use proactive suggested questions. Most of Cornbread's Shopping Assistant engagement comes from Suggested Product Questions that anticipate customer needs before they even ask.
  • Treat AI as another team member. Make sure the tone and language AI uses match what human agents would say to maintain consistent customer relationships.
  • Free up agents for high-value work. With AI handling straightforward inquiries, Cornbread's CX team expanded into social media support, launched a retail pop-up shop, and has more time for relationship-building phone calls.

Customer education has become a critical factor in converting browsers into buyers. For wellness brands like Cornbread Hemp, where customers need to understand ingredients, dosages, and benefits before making a purchase, education has a direct impact on sales. The challenge is scaling personalized education when support teams are stretched thin, especially during peak sales periods.

Katherine Goodman, Senior Director of Customer Experience, and Stacy Williams, Senior Customer Experience Manager, explain how implementing Gorgias's AI Shopping Assistant transformed their customer education strategy into a conversion powerhouse. 

In our second AI in CX episode, we dive into how Cornbread achieved a 30% conversion rate during BFCM, saving their CX team over four days of manual work.

Top learnings from Cornbread's conversational commerce strategy

1. Customer education drives conversions in wellness

Before diving into tactics, understanding why education matters in the wellness space helps contextualize this approach.

Katherine, Senior Director of Customer Experience at Cornbread Hemp, explains:

"Wellness is a very saturated market right now. Getting to the nitty-gritty and getting to the bottom of what our product actually does for people, making sure they're educated on the differences between products to feel comfortable with what they're putting in their body."

The most common pre-purchase questions Cornbread receives center around three areas: ingredients, dosages, and specific benefits. Customers want to know which product will help with their particular symptoms. They need reassurance that they're making the right choice.

What makes this challenging: These questions require nuanced, personalized responses that consider the customer's specific needs and concerns. Traditionally, this meant every customer had to speak with a human agent, creating a bottleneck that slowed conversions and overwhelmed support teams during peak periods.

2. Shopping Assistant provides education that never sleeps

Stacy, Senior Customer Experience Manager at Cornbread, identified the game-changing impact of Shopping Assistant:

"It's had a major impact, especially during non-operating hours. Shopping Assistant is able to answer questions when our CX agents aren't available, so it continues the customer order process."

A customer lands on your site at 11 PM, has questions about dosage or ingredients, and instead of abandoning their cart or waiting until morning for a response, they get immediate, accurate answers that move them toward purchase.

The real impact happens in how the tool anticipates customer needs. Cornbread uses suggested product questions that pop up as customers browse product pages. Stacy notes:

"Most of our Shopping Assistant engagement comes from those suggested product features. It almost anticipates what the customer is asking or needing to know."

Actionable takeaway: Don't wait for customers to ask questions. Surface the most common concerns proactively. When you anticipate hesitation and address it immediately, you remove friction from the buying journey.

3. Implementation follows a clear three-phase approach

One of the biggest myths about AI is that implementation is complicated. Stacy explains how Cornbread’s rollout was a straightforward three-step process: audit your knowledge base, flip the switch, then optimize.

"It was literally the flip of a switch and just making sure that our data and information in Gorgias was up to date and accurate." 

Here's Cornbread’s three-phase approach:

  1. Preparation. Before launching, Cornbread conducted a comprehensive audit of their knowledge base to ensure accuracy and completeness. This groundwork is critical because your AI is only as good as the information it has access to.
  2. Launch and training. After going live, the team met weekly with their Gorgias representative for three to four weeks. They analyzed engagements, reviewed tickets, and provided extensive AI feedback to teach Shopping Assistant which responses were appropriate and how to pull from the knowledge base effectively.
  3. Ongoing optimization. Now, Stacy conducts audits biweekly and continuously updates the knowledge base with new products, promotions, and internal changes. She also provides daily AI feedback, ensuring responses stay accurate and on-brand.

Actionable takeaway: Block out time for that initial knowledge base audit. Then commit to regular check-ins because your business evolves, and your AI should evolve with it.

Read more: AI in CX Webinar Recap: Turning AI Implementation into Team Alignment

4. Simple, concise language converts better

Here's something most brands miss: the way you write your knowledge base articles directly impacts conversion rates.

Before BFCM, Stacy reviewed all of Cornbread's Guidance and rephrased the language to make it easier for AI Agent to understand. 

"The language in the Guidance had to be simple, concise, very straightforward so that Shopping Assistant could deliver that information without being confused or getting too complicated," Stacy explains. When your AI can quickly parse and deliver information, customers get faster, more accurate answers. And faster answers mean more conversions.

Katherine adds another crucial element: tone consistency.

"We treat AI as another team member. Making sure that the tone and the language that AI used were very similar to the tone and the language that our human agents use was crucial in creating and maintaining a customer relationship."

As a result, customers often don't realize they're talking to AI. Some even leave reviews saying they loved chatting with "Ally" (Cornbread's AI agent name), not realizing Ally isn't human.

Actionable takeaway: Review your knowledge base with fresh eyes. Can you simplify without losing meaning? Does it sound like your brand? Would a customer be satisfied with this interaction? If not, time for a rewrite.

Read more: How to Write Guidance with the “When, If, Then” Framework

5. Black Friday results proved the strategy works under pressure

The real test of any CX strategy is how it performs under pressure. For Cornbread, Black Friday Cyber Monday 2025 proved that their conversational commerce strategy wasn’t just working; it was thriving.

Over the peak season, Cornbread saw: 

  • Shopping Assistant conversion rate jumped from a 20% baseline to 30% during BFCM
  • First response time dropped from over two minutes in 2024 to just 21 seconds in 2025
  • Attributed revenue grew by 75%
  • Tickets doubled, but AI handled 400% more tickets compared to the previous year
  • CSAT scores stayed exactly in line with the previous year, despite the massive volume increase

Katherine breaks down what made the difference:

"Shopping Assistant popping up, answering those questions with the correct promo information helps customers get from point A to point B before the deal ends."

During high-stakes sales events, customers are in a hurry. They're comparing options, checking out competitors, and making quick decisions. If you can't answer their questions immediately, they're gone. Shopping Assistant kept customers engaged and moving toward purchase, even when human agents were swamped.

Actionable takeaway: Peak periods require a fail-safe CX strategy. The brands that win are the ones that prepare their AI tools in advance.

6. Strategic work replaces reactive tasks

One of the most transformative impacts of conversational commerce goes beyond conversion rates. What your team can do with their newfound bandwidth matters just as much.

With AI handling straightforward inquiries, Cornbread's CX team has evolved into a strategic problem-solving team. They've expanded into social media support, provided real-time service during a retail pop-up, and have time for the high-value interactions that actually build customer relationships.

Katherine describes phone calls as their highest value touchpoint, where agents can build genuine relationships with customers. “We have an older demographic, especially with CBD. We received a lot of customer calls requesting orders and asking questions. And sometimes we end up just yapping,” Katherine shares. “I was yapping with a customer last week, and we'd been on the call for about 15 minutes. This really helps build those long-term relationships that keep customers coming back."

That's the kind of experience that builds loyalty, and becomes possible only when your team isn't stuck answering repetitive tickets.

Stacy adds that agents now focus on "higher-level tickets or customer issues that they need to resolve. AI handles straightforward things, and our agents now really are more engaged in more complicated, higher-level resolutions."

Actionable takeaway: Stop thinking about AI only as a cost-cutting tool and start seeing it as an impact multiplier. The goal is to free your team to work on conversations that actually move the needle on customer lifetime value.

7. Continuous optimization for January and beyond

Cornbread isn't resting on their BFCM success. They're already optimizing for January, traditionally the biggest month for wellness brands as customers commit to New Year's resolutions.

Their focus areas include optimizing their product quiz to provide better data to both AI and human agents, educating customers on realistic expectations with CBD use, and using Shopping Assistant to spotlight new products launching in Q1.

Build your conversational commerce strategy now

The brands winning at conversational commerce aren't the ones with the biggest budgets or the largest teams. They're the ones who understand that customer education drives conversions, and they've built systems to deliver that education at scale.

Cornbread Hemp's success comes down to three core principles: investing time upfront to train AI properly, maintaining consistent optimization, and treating AI as a team member that deserves the same attention to tone and quality as human agents.

As Katherine puts it:

"The more time that you put into training and optimizing AI, the less time you're going to have to babysit it later. Then, it's actually going to give your customers that really amazing experience."

Watch the replay of the whole conversation with Katherine and Stacy to learn how Gorgias’s Shopping Assistant helps them turn browsers into buyers. 

{{lead-magnet-1}}


Make AI Sound More Human: How to Avoid Robotic Replies in Customer Support

Learn how small tweaks can make AI sound human and build trust in customer support.
By Gorgias Team

TL;DR:

  • Train your AI on your brand voice. A clear voice guide that covers tone, style, and formality helps your AI sound more natural and aligned with your brand.
  • Add short delays before AI responds. A one- or two-second pause can make AI responses seem more thoughtful.
  • Avoid generic phrases. Swap out formal responses for on-brand language that sounds like a real person on your team.
  • Mention customer context in replies. Referencing order history or previous conversations makes AI sound more human and builds trust.
  • Balance automation with human support. Let customers know when they are speaking to AI and escalate to a human when needed to avoid frustration.

Your AI sounds like a robot, and your customers can tell.

Sure, the answer is right, but something feels off. The tone is stiff. The phrases are predictable and generic. At worst, it sounds copy-pasted. That may not seem like a big deal from your side of the support desk, but in reality, it’s costing you more than you think.

Recent data shows that 45% of U.S. adults view customer service chatbots unfavorably, up from 43% in 2022. As awareness of chatbots has increased, so have negative opinions of them. Only 19% of people say chatbots are helpful in addressing their queries. The gap isn't just about capability. It's about trust. When AI sounds impersonal, customers disengage or leave frustrated.

Luckily, you don't need to choose between automation and the human touch. 

In this guide, we'll show you six practical ways to train your AI to sound natural, build trust, and deliver the kind of support your customers actually like.

1. Train your AI on your brand voice

The fastest way to make your AI sound more human is to teach it to sound like you. AI is only as good as the input you give it, so the more detailed your brand voice training, the more natural and on-brand your responses will be.

Start by building a brand voice guide. It doesn't need to be complicated, but it should clearly define how your brand communicates with customers. At minimum, include:

  • Tone: Is your brand warm and empathetic? Confident and cheeky? Straightforward and helpful?
  • Style: How does your brand write? Short or long sentences, contractions or not, punctuation choices, and overall rhythm.
  • Formality: Do you use slang? Emojis? Address customers as “you,” “y’all,” or something else?
  • Friendliness: How personable should your AI sound? Is it playful, or should responses stay neutral and professional?

Think of your AI as a character. Samantha Gagliardi, Associate Director of Customer Experience at Rhoback, described their approach as building an AI persona:

"I kind of treat it like breaking down an actor. I used to sing and perform for a living — how would I break down the character of Rhoback? How does Rhoback speak? What age are they? What makes the most sense?" 

Next step

✅ Create a brand voice guide with tone, style, formality, and example phrases.

2. Delay responses to mimic human behavior

Humans associate short pauses with thinking, so when your AI responds too quickly, it instantly feels unnatural.

Adding small delays helps your AI feel more like a real teammate.

Where to add response delays:

  • Before sharing info that would realistically take a moment to look up, e.g., order history
  • Before confirming an action like issuing a refund or applying a discount
  • Transitioning or escalating between steps or agents
  • Emotional messages, like customer complaints and product quality issues

Even a one- to two-second pause can make the difference between an AI that sounds robotic and one that sounds human.
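As a rough sketch, the delay can be tuned to the moment. The categories below mirror the list above, but the exact timings and category names are assumptions for illustration, not settings from any specific platform:

```python
import random

# Hypothetical delay ranges (in seconds) per message type.
DELAY_RANGES = {
    "lookup": (1.0, 2.0),     # fetching order history or tracking info
    "action": (1.0, 2.0),     # confirming a refund or applying a discount
    "handoff": (1.5, 2.5),    # transitioning between steps or agents
    "emotional": (1.5, 3.0),  # complaints and product quality issues
}

def response_delay(message_type: str) -> float:
    """Return a randomized, human-feeling delay for the given message type."""
    low, high = DELAY_RANGES.get(message_type, (0.5, 1.0))
    return round(random.uniform(low, high), 2)

delay = response_delay("emotional")
# A real integration would call time.sleep(delay) before sending the reply.
```

Randomizing within a range avoids the uncanny consistency of an always-identical pause.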

Next step

✅ Add instructions in your AI’s knowledge base to include short response delays during key moments.

3. Avoid generic phrasing and canned language

Generic phrases make your AI sound like... well, AI. Customers can spot a copy-pasted response immediately — especially when it's overly formal.

That doesn't mean you need to be extremely casual. It means being true to your brand. Whether your voice is professional or conversational, the goal is the same: sound like a real person on your team.

Here's how to replace robotic phrasing with more brand-aligned responses:

  • “We apologize for the inconvenience.” → “Sorry about that, we’re working on it now.” (friendly) or “Apologies for the trouble. We’re resolving this ASAP.” (professional)
  • “Your satisfaction is our top priority.” → “We want to make sure this works for you.” (friendly) or “Let us know how we can make this right.” (professional)
  • “Please be advised…” → “Just a quick heads up…” (friendly) or “For your reference…” (professional)
  • “Your request has been received.” → “Got it. Thanks for reaching out.” (friendly) or “We’ve received your request and will follow up shortly.” (professional)
  • “I will now review your request.” → “Let me take a quick look.” (friendly) or “I’m reviewing the details now.” (professional)

Next step

✅ Identify your five most common inquiries and give your AI a rewritten example response for each.

4. Use context to inform answers

One of the biggest tells that a response is AI-generated? It ignores what's already happened.

When your AI doesn't reference order history or past conversations, customers are forced to repeat themselves. Repetition can lead to frustration and can quickly turn a good customer experience into a bad one.

Great AI uses context to craft replies that feel personalized and genuinely helpful.

Here's what good context looks like in AI responses:

  • Order awareness: The AI knows the customer placed an order yesterday and provides an accurate delivery estimate without asking for the order number again.
  • Conversation continuity: If the customer reached out earlier that week from a different support channel, the AI references that interaction or picks up where things left off.
  • Customer type: First-time shopper? VIP? The AI adjusts tone and detail level accordingly.

Tools like Gorgias AI Agent automatically pull in customer and order data, so replies feel human and contextual without sacrificing speed.
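To make the idea concrete, here is a minimal sketch of assembling that context before a reply is generated. The data shapes, field names, and thresholds are hypothetical illustrations, not a real helpdesk API:

```python
def build_context(customer):
    """Summarize order history, past contacts, and customer type
    into context an AI reply can draw on."""
    lines = []
    orders = customer.get("orders", [])
    if orders:
        last = orders[-1]
        lines.append(
            f"Most recent order {last['id']} placed {last['placed']}, "
            f"status: {last['status']}."
        )
    if customer.get("past_tickets"):
        channel = customer["past_tickets"][-1]["channel"]
        lines.append(f"Previous contact via {channel}; pick up where that left off.")
    # Assumed tiering rule for illustration only.
    if customer.get("lifetime_value", 0) > 1000:
        tier = "VIP"
    elif len(orders) <= 1:
        tier = "first-time"
    else:
        tier = "returning"
    lines.append(f"Customer type: {tier}; adjust tone and detail accordingly.")
    return "\n".join(lines)

customer = {
    "orders": [{"id": "1042", "placed": "yesterday", "status": "shipped"}],
    "past_tickets": [{"channel": "email"}],
    "lifetime_value": 180,
}
print(build_context(customer))
```

The point is that every reply starts from what is already known, so the customer never has to repeat themselves.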

Next step

✅ Add instructions that prompt your AI to reference order details and/or past conversations in its replies, so customers feel acknowledged.

5. Balance automation with human handoff

Customers just want help. They don't care whether it comes from a human or AI, as long as it's the right help. But if you try to trick them, it backfires fast. AI that pretends to be human often gives customers the runaround, especially when the issue is complex or emotional.

A better approach is to be transparent. Solve what you can, and hand off anything else to an agent as needed.

When to disclose that the customer is talking to AI:

  • At the start of the conversation, or via a disclaimer in your chat widget, contact page, or help center
  • When the customer asks to speak to a human or expresses frustration
  • If the AI cannot fulfill the request and needs to escalate
  • Anytime the AI is making decisions, like issuing refunds or processing cancellations
  • When transitioning from AI to a human agent
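A simple escalation check along these lines might look like the following sketch; the trigger phrases and function shape are illustrative assumptions, not a built-in feature of any platform:

```python
# Hypothetical trigger phrases; a real deployment would tune these.
HUMAN_REQUEST = {"speak to a human", "talk to a person", "real person"}
FRUSTRATION = {"ridiculous", "terrible", "frustrated", "angry"}

def should_escalate(message: str, can_fulfill: bool) -> bool:
    """Escalate when the customer asks for a human, sounds frustrated,
    or the AI cannot fulfill the request on its own."""
    text = message.lower()
    if any(phrase in text for phrase in HUMAN_REQUEST):
        return True
    if any(word in text for word in FRUSTRATION):
        return True
    return not can_fulfill

print(should_escalate("Can I speak to a human?", True))   # True
print(should_escalate("Where is my order?", True))        # False
print(should_escalate("Please cancel my order", False))   # True
```

The handoff message itself should set expectations and carry the conversation history forward so the agent is not starting cold.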

For more on this topic, check out our article: Should You Tell Customers They're Talking to AI?

Next step

✅ Set clear rules for when your AI should escalate to a human and include handoff messaging that sets expectations and preserves context.

6. Add intentional imperfections to sound human

We're giving you permission to break the rules a little bit. The most human-sounding AI doesn't follow perfect grammar or structure. It reflects the messiness of real dialogue.

People don't speak in flawless sentences every time. We pause, rephrase, cut ourselves off, and throw in the occasional emoji or "uh." When AI has an unpredictable cadence, it feels more relatable and, in turn, more human.

What an imperfect AI could look like: 

  • Vary sentence length and structure. Some short and choppy, others long. 
  • Add subtle grammatical “mistakes” like sentence fragments or informal punctuation. 
  • Mix in casual phrasing or idioms where appropriate. 
  • Avoid mechanical-sounding transitions. 
  • Occasionally use filler phrases like "kinda," "just checking," or "I think."

These imperfections give your AI a more believable voice.

Next step

✅ Add instructions for your AI that permit variation in grammar, tone, and sentence structure to mimic real human speech.

Natural-sounding AI is easier to set up than you think

Human-sounding AI doesn’t require complex prompts or endless fine-tuning. With the right voice guidelines, small tone adjustments, and a few smart instructions, your AI can sound like a real part of your team.

Book a demo of Gorgias AI Agent and see for yourself.

{{lead-magnet-2}}


Further reading


What’s New With Gorgias – May 2022 Product Updates

By Morgan Smith
4 min read.

Each month, our product team holds a casual, conversational event with our customers to demo new features, receive real-time feedback, and answer live Q&As. 

Watch the video recap here, or read on for a rundown of the latest releases.

1. SMS is officially live for all accounts (3:10) 

With this new channel, you can receive and respond to SMS and MMS messages within Gorgias. This makes it easy for your customers to communicate with your store while they’re on the go, and easy for your agents to provide fast, conversational support. 

SMS tickets shown in Gorgias feed

We’re releasing SMS this quarter as a free trial for every customer on every plan. Conversations will count toward your plan’s ticket count, but there are no additional charges for minutes, usage, phone numbers, etc. In the coming months, we’ll be assessing the best way to provide Voice and SMS so we can continue to innovate and build powerful new features for these channels.

2. SMS Pro-tip: Create a Gorgias Rule to set up a double opt-in (11:55) 

If you want customers to consent to receive SMS messages before your agents actually reply, you can do this with a simple Rule in Gorgias. Here’s what it would look like: 

Gorgias automation rule for an SMS double opt-in

Read this article for four more Gorgias Rules to help automate SMS.

3. Agents will now receive browser notifications when a ticket is assigned to them (14:55) 

Browser notifications an agent would get using Gorgias

This is especially great for anyone who gets tickets assigned to them, but may not be looking at Gorgias throughout their entire workday. (Think managers, social media collaborators, etc.) 

To see these notifications, you may need to adjust your browser and/or computer settings. You can see an example for Chrome + Mac in our official Product Update.

4. Quick response flows in self-service got a revamp  (21:48) 

Quick response flows bring a critical component to self-service, creating more ways to engage with shoppers who visit your store online. We designed them around the insight that customers use chat to ask pre-purchase questions 60% of the time. The most successful merchants leverage their FAQ content to prompt conversations with quick response flows that generate revenue, trust, and loyalty.

If you haven’t yet activated quick response flows, you’re in for a treat. With this revamp, you can configure every step of the quick response flow experience directly from your self-service settings. Under the Quick Response Flows tab, write in any question and answer you prefer and hit save. There is no other place or screen you need to navigate to. Using the preview on the right, you can verify the quality of the experience you want to create for your customers.

Quick response flows in the Gorgias platform

If customers click on a quick response flow and find the information they need, this will not count toward your monthly ticket volume.

If they click on a quick response flow and select the “No, I need more help” option, it will create a ticket for an agent to address.

Best practices

It’s amazing when our merchants start using a feature and take it to the next level. Some of the best practices we’ve seen include creating a unique tag for each quick response flow (e.g., Quick_Response_Flow_1), then adding a corresponding view in Tickets. This way, you can closely track the conversations prompted by quick response flows and dedicate a select group of trained agents to expand on the subject and help your customers become fans. For more on this subject, check out the Quick Response Flows help doc.

Customer Q&A (26:50) 

Tune into that timestamp if you want the full 25 minutes of customer-led questions and answers from our product team. Here were a few of the highlights!

Gorgias Phone vs Aircall: What are the differences? What’s the timeline for improvement for Gorgias Voice? (32:40) 

Gorgias phone is an easy way to add a basic phone line to your store. If you’re looking for advanced, full call center features, our partners like Aircall or RingCentral may be a better solution for you. 

For example, their phone-specific statistics are more in-depth than ours, but the ability to create a phone number and answer it in the Gorgias helpdesk is naturally easier with Gorgias. 

Our long-term vision for Gorgias Phone is not to fully compete with apps like Aircall, but rather to invest in ecommerce-specific solutions so you can provide the best voice support to your shoppers. 

What’s up with WhatsApp? (37:50)

It’s our next new channel, coming Q3! We have access to the API and are ready to start building at the end of the quarter. (Just need to polish up a few existing channel bugs first.) 

Any plans to integrate with Shopify Blogs? (43:15) 

Not yet, but we’d love to hear more feedback about this if it’s something you’re interested in! Submit this idea on our Product Roadmap to help us prioritize it. 

Join us for the next monthly product event

That completes our recap of our May customer product event. We hold these events once a month to review the latest releases and connect with our customers in real time. It’s a favorite for both customers and the Gorgias team. 

If you’d like to sign up for the next one to attend live, you can register here. We’d love to have you join us! 

Voice Support Benefits

4 Benefits of Adding Voice Support to Your Ecommerce Store

By Morgan Smith
4 min read.

Wondering if your team should add voice support to your ecommerce channels this year? You’re not alone. 

Over 15% of our customers currently have a phone integration added to their account, thanks to the Gorgias Voice integration and partners like Aircall and RingCentral.

While voice support may feel like an “outdated” channel in the age of live chat and social media, this tells us that ecommerce support teams are increasingly finding value in offering it to their customers. 

Here are 4 benefits of adding voice support to your ecommerce store: 

  • You can achieve faster first response times and resolution times.
  • It’s easier to express empathy with customers.
  • Having a phone number builds trust and brand quality.
  • It makes your support more accessible.

You can achieve faster first response times and faster resolution times. 

Phones are an immediate communication channel, so it’s not surprising that adding voice support can boost your first response time. What we weren’t expecting, however, was by how much: 

Our customers with phones have a first response time that’s 7x faster than merchants that don’t offer voice support. (30 minutes compared to 4 hours.) 

What’s even more important to note, however, is that adding voice support doesn’t decrease resolution time (like many support managers fear). In fact, it makes quite a positive impact: 

Our merchants using phones have an average resolution time that’s 34% faster than customers who don’t. 

So not only does this channel help you respond to customers faster, it helps you resolve their issues faster. That means your team can work more efficiently and spend about a third less time resolving each ticket. (Imagine how that could help increase your store’s revenue!) 

It’s easier to express empathy with customers (which can lead to better Satisfaction scores).

Talking (literally) to shoppers and hearing their tone of voice is the best way your agents can adjust their responses to create a great customer experience. 

While you can do your best to read clues in email and chat, it’s always going to be easier to match the customer’s tone when actually listening to them on the phone. 

And when your agents can express empathy and solve the problem accordingly, you’ve got a better chance at getting that 5-star review and positive customer feedback. 

Our customers using phones have an average Satisfaction score of 4.56 out of 5. 

While that score also depends a lot on your support agents and their personal approach to customer service, there’s no denying that actually speaking to clients is helpful for both parties in those moments.

Having a phone number builds trust and brand quality. 

Especially if you sell high-end products or have VIP customers (like wholesalers buying in bulk), having a phone number adds a level of legitimacy to your business. 

Since most online stores don’t immediately add phones as a support channel, it will stand out to customers when your shop does offer voice support. 

Phones add a sense of maturity to your business, and especially if you’re using an integrated solution like Gorgias Voice, there isn’t much cost involved in elevating your store’s status this way. 

It makes your support more accessible. 

While the internet has come a long way over the years in terms of accessibility, the truth remains that phone support may be an easier and more comfortable contact method for some of your customers than digital channels. 

Test your live chat experience with a screen reader, for example. What’s the experience like? (And how does it compare to dialing a phone number and talking verbally to someone?) 

If there’s a chance that voice support is more approachable for a part of your customer demographic, you’ll create a better shopping experience for them by adding a phone line. 

Now that you know the benefits of phone support, how do you actually add it? 

The first thing you’ll need to decide is who on your team will actually be answering the phones. 

A few options to explore: 

  1. Having your existing support agents answer phones. This is best if your agents aren’t already too busy, or if someone on your team is particularly good at verbal communication. 
  2. Hiring new agents. This is the situation many support managers find themselves in: they want to add phones, but don’t feel they have the right staff yet to manage it. Hiring someone new can help, but we also recommend following these tips to keep resolution times fast and phone processes efficient. 
  3. Outsourcing phone support. If you’re expecting a large call volume and don’t feel you can staff the team internally to support it, outsourcing to a call agency is always an option. This can be expensive upfront, however, so it may be best to try one of the other options first and consider this a last resort. 

Next, you’ll need to choose a phone platform. 

If you’re adding our built-in voice channel to your Gorgias helpdesk, all you have to do to get started is log into your Gorgias helpdesk and create a new number (or forward or port an existing one, if you happen to have one already). 

Our phone integration is included in all Gorgias plans, and unlike other providers, there’s no annual contract fee and no minimum seat requirement. 

This makes it a great option for teams looking to add phones for the first time or who want to manage all communication channels in one place. 

Plus, our ecommerce integrations save your agents time by displaying callers’ shopping history right in the helpdesk, so they don’t have to go searching for the last order, for example. 

For more tips on how to create efficient phone processes and reduce resolution times by 34%, check out this article.

Finally, once you’ve set up your team and chosen your provider, all that’s left to do is make your number visible. 

If you’re offering voice support for all your customers, you might place it in the footer of your website or all transactional emails. 

If you’re piloting voice support or using it exclusively for a segment of shoppers, you might save it for smaller email segments or place it only on dedicated landing pages just for them. 

Wherever you decide to put your number, just make sure it's easily accessible and clearly visible so your shoppers can start calling, and your support team can start delivering even better customer experiences!

Start SMS Support

Start providing SMS support today, with Gorgias

By Morgan Smith
4 min read.

SMS is a convenient way for customers to contact your brand and receive fast support. It’s no wonder it’s one of the top five channels that consumers expect to engage with brands, alongside email, voice, website, and in-person. 

Every Gorgias plan now includes two-way SMS at no additional cost, making it easy for your brand to start offering this conversational channel.

Why offer SMS support?

There are many reasons to offer customer service messaging, but here are the top four:

It’s fast and conversational

SMS is a conversational, real-time channel. The benefit of this is that customers tend to keep the conversation short and reply quickly to follow-up questions, meaning your agents can resolve the situation quickly, too. 

Customers can contact you while they’re “on the go” 

Most people keep their phone with them everywhere they go. With SMS, it’s easy for customers to start the conversation and follow up as they move through their day, instead of feeling tied to a chat window on their laptop. 

It’s natural for younger customers

Texting a brand feels like texting a friend. Younger clientele will feel natural using this support channel, and it can even help you build that friendly feeling into your brand perception. 

It makes sending photos back and forth easy

Does your refund or return policy require photo evidence to kick off the process? If your customers ever need to send pictures of damaged items or wrong products, SMS is the perfect channel because they’re probably taking those photos on their phone anyway. 

Still not sure if SMS is a support channel your brand should prioritize? Try it for 2 weeks. Because SMS is included in every Gorgias plan, it’s easy to turn off if you decide it isn’t right. 

Recommended reading: Our list of 60+ fascinating customer service statistics.

How to add SMS to your helpdesk 

You’ll need two things to get started with Gorgias SMS. (Don’t worry, they’re both quick!) 

The first is a Gorgias account. If you’re new here, it only takes a few minutes to create one on the Gorgias helpdesk, and you can always book a call with our sales team if you have questions. 

The second is a Gorgias-owned phone number, meaning you either created it in Gorgias or ported it from your previous phone provider. You can do both of these actions in Settings > Phone Numbers.

Note: SMS is currently only available for US, UK, and Canadian numbers. 

Once your phone number is ready in Gorgias, you can add the SMS integration to it. You can do this from Settings > Integrations > SMS.

Once the integration is active, you’re ready to start replying to SMS conversations from your customers. 

To tell your customers they can now text your brand, we recommend adding “Text us,” plus your phone number, in some or all of these places: 

  • The footer of your website
  • The “Contact Us” page of your website
  • Your Gorgias Help Center
  • Transactional emails (order confirmation, return initiated, etc.)

4 automation Rules to help you get started 

Below are four top automation rules to take full advantage of SMS customer service. We also have a full guide on customer service messaging that includes templates and macros to upgrade your SMS support.

Auto-tag with “SMS”

SMS is an official channel in Gorgias, meaning you can see SMS-specific stats and create SMS-specific Views out of the box. There may be times, however, when you also want to tag tickets with “SMS.” In that case, you can do so with a Rule like this: 

Auto-assign to a real-time team

SMS is a fast, conversational channel, so you’ll want to assign these tickets to agents that can keep up with the pace. If you have a dedicated chat team, they’ll be naturals at answering questions via SMS, as well. Here’s a Rule that will automatically assign SMS tickets to a specific team. 

Auto-reply: Message received

When customers text your brand, they’ll expect a fast response. To buy your agents some time, we recommend sending an auto-response to let the customer know their message has been received and an agent will be with them shortly. This also gives them confidence that the text message did in fact go through, so they don’t follow up right away. 

Auto-reply: Order status

Whenever you add a new communication channel for your customers, you should consider how you’ll respond to WISMO (“Where is my order?”) questions on it. With SMS, you’ll want to keep the length of your reply in mind so you’re not sending an insanely long text message back to customers. We recommend creating a Rule that can A) make sure the reply follows the best format for SMS and B) save your agents from having to answer these WISMO questions manually.

Next: Connect your SMS marketing apps for a seamless experience

Gorgias SMS empowers your brand to keep the conversation going on SMS, even when your customers are on the go.

We also integrate with SMS marketing apps, making it easier for agents to answer promotion replies from one workspace. They can work more efficiently while turning SMS questions into opportunities for better customer value. 

In the Gorgias App Store, you’ll find some of the top ecommerce integration partners like Klaviyo, Attentive, Postscript, and more. 

If your brand is using any of these apps to drive sales via SMS, we highly recommend integrating with Gorgias so your team can work more efficiently toward your revenue goals. When SMS marketing and SMS customer service work in tandem, they are far more powerful.

Want to see an example of a brand that successfully launched SMS customer support and effectively drove customers to use the new channel? Check out our playbook of Berkey Filters, an ecommerce merchant that did just that.

Ready to get started with this conversational support channel? Add SMS to your Gorgias helpdesk today or book a call with our team to learn more.

Continuous Deployment

Leveraging Automation on Our Path to Continuous Deployment and GitOps

By Vincent Gilles
9 min read.

As we all locked down in March 2020 and changed our shopping habits, many brick-and-mortar retailers started their first online storefronts. 

Gorgias has benefitted from the resulting ecommerce growth over the past two years, and we have grown the team to accommodate these trends. From 30 employees at the start of 2020, we are now more than 200 on our journey to delivering better customer service.

Our engineering team contributed to much of this hiring, which created some challenges and growing pains. What worked at the beginning with our team of three did not hold up when the team grew to 20 people. And the systems that scaled the team to 20 needed updates to support a team of 50. To continue to grow, we needed to build something more sustainable.

Continuous deployment — and the changes required to support it — presented a major opportunity for reaching toward the scale we aspired to. In this article I’ll explore how we automated and streamlined our process to make our developers’ lives easier and empower faster iteration.

Scaling our deployment process alongside organizational growth

Throughout the last two years of accelerated growth, we’ve identified a few things that we could do to better support our team expansion. 

Before optimizing the feature release process, here’s how things went for our earlier, smaller team when deploying new additions:

  1. Open a pull request (PR) on GitHub, which would run our tests in our continuous integration (CI) system
  2. Merge those changes into the main branch, once the changes are approved
  3. Automatically deploy the new commit in the staging/testing environment, after tests run and pass on the main branch
  4. Deploy these changes in our production environment, assuming all goes well up until this point
  5. Post on the dedicated Slack channel to inform the team of the new feature, specifying the project deployed and attaching a screenshot of all commits since the last deployment.
  6. Watch dashboards for any changes — as a failsafe to back up the alerts that were already triggering — to check if the change needed to be rolled back.

This wasn’t perfect, but it was an effective solution for a small team. However, the accelerated growth in the engineering team led to a sharp increase in the number of projects and also collaborators on each project. We began to notice several points of friction:

  • The process was slow and painful. The continuous integration and continuous deployment (CI/CD) systems are meant to speed the process up, but we still need to perform rigorous testing. We needed to find the sweet spot between speed and rigorous testing and we believed both aspects left room for improvement.
  • Developers didn’t always take full ownership of their changes. When a change wasn’t considered critical (which happened fairly often), a developer would often let the next developer with a critical change deploy multiple commits at the same time. When problems occurred, this made it much more difficult to diagnose the bad commit.
  • It was a challenge to track version changes. To track the version of a service that was deployed in production, you had to either check our Kubernetes clusters directly or go through the screenshots in our dedicated Slack channel.
  • Each project had its own set of scripts to help with deployment. We wanted to streamline our deployment process and add some consistency across all projects.

It was clear that things needed to change.

Adjusting practices and tools to lay the foundation for implementing GitOps

On the Site Reliability Engineering (SRE) team, we are fans of the GitOps approach, where Git is the single source of truth. So when the previously mentioned points of friction became more critical, we felt that all the tooling involved in GitOps practices could help us find practical solutions.

Additionally, these solutions would often rely on tooling we already had in place (like Kubernetes or Helm, for example).

What is GitOps?

GitOps is an operational framework. It takes application-development best practices and applies them to infrastructure automation. 

The main takeaway is that in a GitOps setting, everything from code to infrastructure configuration is versioned in Git. It is then possible to create automation by leveraging the workflows associated with Git. 

What are the benefits of implementation?

One such class of automation is “operations by pull requests,” in which pull requests and their associated events trigger various operations. 

Here are some examples:

  • Opening a pull request could build an application and deploy it to a preview environment
  • You could add a commit to said pull request to rebuild the application and update the container image’s version in the preview environment
  • By merging the pull request, you could trigger a workflow that would result in the new changes being deployed in a live production environment

Using ArgoCD as a building block

ArgoCD is a continuous deployment tool that relies on GitOps practices. It helps synchronize live environments and services to version-controlled declarative service definitions and configurations, which ArgoCD calls Applications. 

In simpler terms, an Application resource tells ArgoCD to look at a Git repository and to make sure the deployed service’s configuration matches the one stored in Git.
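As an illustration, a minimal Application manifest might look something like this (the repository URL, names, and paths below are placeholders, not our actual configuration):

```yaml
# Hypothetical ArgoCD Application; repoURL, path, and names are illustrative.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: awesome-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/awesome-service.git
    targetRevision: main   # a branch to track, or a pinned commit SHA
    path: chart            # directory containing the service's Helm chart
  destination:
    server: https://kubernetes.default.svc
    namespace: awesome-service
```

With a manifest like this in place, ArgoCD continuously compares what is running in the cluster against what is declared in Git and reconciles any drift.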

The goal wasn’t to reinvent the wheel when implementing continuous deployment. We instead wanted to approach it in a progressive manner. This would help build developer buy-in, lay the groundwork for a smoother transition, and reduce the risk of breaking deploys. ArgoCD was an excellent step toward those goals, given how flexible it is with customizable Config Management Plugins (CMP).

ArgoCD can track a branch to keep everything up to date with the last commit, but can also make sure a particular revision is used. We decided to use the latter approach as an intermediate step, because we weren’t quite ready to deploy off the HEAD of our repositories. 

The only difference from a pipeline perspective is that it now updates the tracked revision in ArgoCD instead of running our complex deployment scripts. ArgoCD has a Command Line Interface (CLI) that allows us to simply do that. Our deployment jobs only need to run the following command:
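As a sketch, such a command might look like this (the application name and commit SHA are placeholders, not our actual values):

```shell
# Pin the ArgoCD Application to a specific Git revision.
argocd app set awesome-service --revision 1a2b3c4d
```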

The developers’ workflow is left untouched at this point. Now comes the fun part.

Building automation into our process to move faster

Our biggest requirement for continuous deployment was to have some sort of safeguard in case things went wrong. No matter how much we trust our tests, it is always possible that a bug makes its way to our production environments.

Before implementing Argo Rollouts, we still kept an eye on the system to make sure everything was fine during deployment and took quick action when issues were discovered. But up to that point, this process was carried out manually. 

It was time to automate that process, toward the goal of raising our team’s confidence levels when deploying new changes. By providing a safety net, of sorts, we could be sure that things would go according to plan without manually checking it all.

Argo Rollouts can revert changes automatically, when issues arise

Argo Rollouts is a progressive delivery controller. It relies on a Kubernetes controller and set of custom resource definitions (CRD) to provide us with advanced deployment capabilities on top of the ones natively offered by Kubernetes. These include features like:

  • Blue/Green, which consists of deploying all the new instances of our application alongside the old version without sending any traffic to them at first. We can then run some tests on the new version and flip the switch once we’ve made sure everything is fine. Once no more traffic is sent to the old version, we can tear it down.
  • Canary deployments, which allow us to start by deploying only a small number of replicas running the new version of our software. This way, we’re able to shift a small portion of traffic to the new version. We can do this in multiple steps, shifting only 1% of the traffic at first, then 10%, 50%, or even more depending on what we’re trying to achieve.
  • Analyzing new deployments’ performance. Argo Rollouts allows us to automate checks as we roll out a new version of our software. To do that, we describe the checks in an AnalysisTemplate resource, which Argo Rollouts uses to query our metric provider and make sure everything is fine.
  • Experiments, another resource Argo Rollouts introduces to allow for short-lived experiments such as A/B testing.
  • Progressive delivery in Kubernetes clusters, by managing the entire rollout process and allowing us to describe the desired steps of a rollout. It lets us set a weight for a canary deployment (the ratio between pods running the new and the old versions), perform an analysis, or even pause a deployment for a given amount of time or until manual validation.
Argo Rollouts dashboard view of our awesome-service rollout. On the left we can see the current version is stable and on the right we can see the different steps during the rollout process, top to bottom.

We were especially interested in the canary and canary analysis features. By shifting only a small portion of traffic to the new version of an application, we can limit the blast radius in case anything is wrong. Performing an analysis allows us to automatically, and periodically, check that our service’s new version is behaving as expected before promoting this canary. 

Argo Rollouts is compatible with multiple metric providers including Datadog, which is the tool we use. This allows us to run a Datadog query (or multiple) every few minutes and compare the results with a threshold value we specify. 

We can then configure Argo Rollouts to automatically take action, should the threshold(s) be exceeded too often during the analysis. In those cases, Argo Rollouts scales down the canary and scales the previous stable version of our software back to its initial number of replicas.
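As a sketch, such a check might be declared like this (the Datadog query, interval, and thresholds are invented for illustration, not our real values):

```yaml
# Hypothetical AnalysisTemplate; the Datadog query and limits are made up.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: error-rate-check
spec:
  metrics:
    - name: error-rate
      interval: 5m           # re-run the query every few minutes
      failureLimit: 3        # abort and roll back after 3 failed checks
      successCondition: result < 0.05
      provider:
        datadog:
          query: "sum:awesome_service.errors{env:production}.as_rate()"
```

If the query result exceeds the threshold too often during the rollout, Argo Rollouts aborts the canary and restores the stable version automatically.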

Argo rollouts in action, stopping a bad deploy that would have certainly caused a large problem

Each service has its own metrics to monitor, but for starters we added an error rate check for all of our services.

Creating a deployment conductor to simplify configuration and deployment management

Remember when I mentioned replacing complex, project-specific deployment scripts with a single, simple command? That’s not entirely accurate, and requires some additional nuance for a full understanding.

Not only did we need to deploy software on different kinds of environments (staging and production), but also in multiple Kubernetes clusters per environment. For example, the applications composing the Gorgias core platform are deployed across multiple cloud regions all around the world.

While ArgoCD and Argo Rollouts might seem like magic tools, we still need some “glue” to make everything stick together. Thanks to ArgoCD’s Application-based mechanisms, we were able to get rid of custom scripts and use a single common tool across all projects. We named this in-house tool the deployment conductor.

We even went a step further and implemented this tool in a way that accepts simple YAML configuration files. Such files allow us to declare various environments and clusters in which we want each individual project to be deployed.

When deploying a service to an environment, our tool will then go through all clusters listed for that environment.

For each of these, it will look for dedicated values.yaml files in the service’s Helm chart directory. This allows developers to change a service’s configuration based on the environment and cluster in which it’s deployed. Typically, they would want to adjust the number of replicas for each service depending on the geographical region.

This makes it much easier for developers than having to manage configuration and maintain deployment scripts.
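As an illustration, such a configuration file might look like this (the environment and cluster names are invented for the example):

```yaml
# Hypothetical deployment-conductor config; names are illustrative only.
environments:
  staging:
    clusters:
      - staging-europe-west1
  production:
    clusters:
      - production-europe-west1
      - production-us-east1
      - production-asia-northeast1
```

A cluster-specific values file in the chart directory (for instance, a hypothetical values.production-us-east1.yaml) would then override defaults like the replica count for that region.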

Enabling continuous deployment

This leads us to the end of our journey’s first leg: our first encounter with continuous deployment.

After we migrated all our Kubernetes Deployments to Argo Rollouts, we let our developers get acclimated for the next few weeks. 

Our new setup still wasn’t fully optimized, but we felt like it was a big improvement compared to the previous one. And while we could think of many improvements to make things even more reliable before enabling continuous deployment, we decided to get feedback from the team during this period, to iterate more effectively.

Some projects introduced additional technicalities to overcome, but we easily identified a small first batch of projects where we could enable CD. Before deployment, we asked the development team if we were missing anything they needed to be comfortable with automatic deployment of their code in production environments. 

With everyone feeling good about where we were at, we removed the manual step in our CI system (GitLab) for jobs deploying to production environments.
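In GitLab CI terms, that change can be as small as deleting one line from the production deploy job. A simplified, hypothetical sketch (the job name and deploy command are illustrative):

```yaml
# Hypothetical .gitlab-ci.yml fragment; job and command names are made up.
deploy-production:
  stage: deploy
  script:
    - deployment-conductor deploy awesome-service --env production
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
      # when: manual   # removing this line enables continuous deployment
```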

Next steps on the path to continuous deployment

We’re still monitoring this closely, but so far we haven’t had any issues. We still plan on enabling continuous deployment on all our projects in the near future, but it will be a work in progress for now.

Here are some ideas for future improvements that anticipate potential roadblocks:

  • Some projects still require additional safeguards before continuous deployment. Automating database migrations is one of our biggest challenges. Helm pre-upgrade hooks would allow us to check if a migration is necessary before updating an application and run it when appropriate. But when automating these database migrations, the tricky part is avoiding heavy locks on critical tables.
  • It still isn’t that easy to track what version of a service is currently deployed. When things go according to plan, the last commit in the main branch should either be deployed or currently deploying. To solve this, we could go a step further and version the state of each application for each cluster, including the version identifier for the version that should be deployed. We’re also monitoring the Argo image updater repository closely. When a stable version is released, it could help us detect new available versions for services, deploy them, and update the configuration in Git automatically.
  • When there are multiple clusters per environment with the same services deployed, we end up with too many ArgoCD applications. One thing we could do is use the “app of apps” pattern and manage a single application to create all the other required applications for a given service.
  • On the bigger projects, the volume of activity may require the queuing of deployments. In fact, if two people merge changes in the main branch around the same time, there could be issues. The last thing we want is for the last commit to be deployed and then replaced by the commit preceding it.
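The Helm pre-upgrade hook mentioned above could be sketched like this (the image name and migration command are placeholders); the Job runs migrations before the new application version is rolled out:

```yaml
# Hypothetical pre-upgrade hook Job; image and command are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: awesome-service-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": before-hook-creation
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: example/awesome-service:latest
          command: ["./manage", "db", "upgrade"]
```

Helm waits for the hook Job to succeed before upgrading the release, which is what makes it a candidate for gating automated migrations. The hard part, as noted above, is keeping those migrations from taking heavy locks on critical tables.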

We’re excited to explore these challenges. And, overall, our developers have welcomed these changes with open arms. It helps that our systems have been successful at stopping bad deployments from creating big incidents so far. 

While we haven’t reached the end of our journey yet, we are confident that we are on the right path, moving at the right pace for our team.

Prevent Idle In Transaction

Avoiding idle-in-transaction connection states with SQLAlchemy

By Gorgias Engineering
10 min read.

As you work with SQLAlchemy, over time, you might have a performance nightmare brewing in the background that you aren’t even aware of.

In this lesser-known issue, which strikes primarily in larger projects, normal usage leads to an ever-growing number of idle-in-transaction database connections. These open connections can kill the overall performance of the application.

While you can fix this issue down the line, when it begins to take a toll on your performance, it takes much less work to mitigate the problem from the start.

At Gorgias, we learned this lesson the hard way. After testing different approaches, we solved the problem by extending the high-level SQLAlchemy classes (namely sessions and transactions) with functionality that allows working with "live" DB (database) objects for limited periods of time, expunging them after they are no longer needed.

This analysis covers everything you need to know to close those unnecessary open DB connections and keep your application humming along.

The problem: your database connection states are monopolizing unnecessary resources

Leading Python web frameworks such as Django come with an integrated ORM (object-relational mapping) that handles all database access, separating most of the low-level database concerns from the actual user code. The developer can write their code focusing on the actual logic around models, rather than thinking of the DB engine, transaction management or isolation level.

While this scenario seems enticing, big frameworks like Django may not always be suitable for our projects. What happens if we want to build our own starting from a microframework (instead of a full-stack framework) and augment it only with the components that we need?

In Python, the extra packages we would use to build ourselves a full-fledged framework are fairly standard: They will most likely include Jinja2 for template rendering, Marshmallow for dealing with schemas and SQLAlchemy as ORM.

Request-response paradigm vs. background tasks

Not all projects are web applications (following a request-response pattern) and among web applications, most of them deal with background tasks that have nothing to do with requests or responses.

This is important to understand because in request-response paradigms, we usually open a DB transaction upon receiving a request and we close it when responding to it. This allows us to associate the number of concurrent DB transactions with the number of parallel HTTP requests handled. A transaction stays open for as long as a request is being processed, and that must happen relatively quickly — users don't appreciate long loading times.

Transactions opened and closed by background tasks are a totally different story: There's no clear and simple rule on how DB transactions are managed at a code level, there's no easy way to tell how long tasks (should) last, and there usually isn't any upper limit to the execution time.

This could lead to potentially long transaction times, during which the process effectively holds a DB connection open without actually using it for the majority of the time period. This state is known as an idle-in-transaction connection state and should be avoided as much as possible, because it blocks DB resources without actively using them.

The limitations of SQLAlchemy with PEP-249

To fully understand how database access happens in a SQLAlchemy-based app, one needs to understand the layers responsible for the execution.

Layers of execution in an SQLAlchemy app

At the highest level, we code our DB interaction using high-level SQLAlchemy queries on our defined models. The query is then transformed into one or more SQL statements by SQLAlchemy's ORM; these statements are passed on to a database engine (driver) through a common Python DB API defined by PEP-249. (PEP-249 is a Python Enhancement Proposal dedicated to standardizing Python DB server access.) The database engine communicates with the actual database server.

At first glance, everything looks good in this stack. However, there's one tiny problem: The DB API (defined by PEP-249) does not provide an explicit way of managing transactions. In fact, it mandates the use of a default transaction regardless of the operations you're executing, so even the simplest select will open a transaction if none is open on the current connection.
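This is easy to observe with Python's built-in sqlite3 module, itself a PEP-249 driver (a small illustration; note that sqlite3 only opens the implicit transaction for modifying statements, while drivers like psycopg2 historically did so for any statement, selects included):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
print(conn.in_transaction)  # False: no transaction yet

# The driver opens a transaction implicitly on the first modifying
# statement; nothing in our code asked for one.
conn.execute("INSERT INTO t VALUES (1)")
print(conn.in_transaction)  # True: an implicit transaction is now open

conn.commit()
print(conn.in_transaction)  # False again
```

Until that commit, the connection sits in exactly the kind of open-transaction state described above.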

SQLAlchemy builds on top of PEP-249, doing its best to stay out of driver implementation details. That way, any Python DB driver claiming PEP-249 compatibility could work well with it.

While this is generally a good idea, SQLAlchemy has no choice but to inherit the limitations and design choices made at the PEP-249 level. More precisely (and importantly), it will automatically open a transaction for you upon the very first query, regardless of whether it's needed. And that's the root of the issue we set out to solve: In production, you'll probably end up with a lot of unwanted transactions, locking up DB resources for longer than desired.

Also, SQLAlchemy uses sessions (in-memory caches of models) that rely on transactions. And the whole SQLAlchemy world is built around sessions. While you could technically ditch them to avoid the idle-in-transaction problem with a “lower-level” interface to the DB, all of the examples and documentation you’ll find online use the “higher-level” interface (i.e. sessions). You will likely feel like you’re swimming against the tide trying to get that workaround up and running.

Postgres and the different types of autocommits

Some DB servers, most notably Postgres, default to an autocommit mode. This mode implies atomicity at the SQL statement level — something developers are likely to expect: they operate outside of a transaction by default and explicitly open a transaction block only when needed.

If you're reading this, you have probably already Googled for "sqlalchemy autocommit" and may have found the official documentation on the (now deprecated) autocommit mode. Unfortunately, this functionality is a "soft" autocommit implemented purely in SQLAlchemy, on top of the PEP-249 driver; it doesn't have anything to do with the DB's native autocommit mode.

This version works by simply committing the opened transaction as soon as SQLAlchemy detects an SQL statement that modifies data. Unfortunately, that doesn't fix our problem; the pointless, underlying DB transaction opened by non-modifying queries still remains open.

When using Postgres, we could in theory play with the new AUTOCOMMIT isolation level option introduced in psycopg2 to make use of the DB-level autocommit mode. However this is far from ideal as it would require hooking into SQLAlchemy's transaction management and adjusting the isolation level each time as needed. Additionally, "autocommit" isn't really an isolation level and it’s not desirable to change the connection's isolation level all the time, from various parts of the code. You can find more details on this matter, along with a possible implementation of this idea in Carl Meyer's article “PostgreSQL Transactions and SQLAlchemy.”

At Gorgias, we always prefer explicit solutions to implicit assumptions. By including all details, even common ones that most developers would assume by default, we can be more clear and leave less guesswork later on. This is why we didn't want to hack together a solution behind the scenes, just to get rid of our idle-in-transactions problem. We decided to dig deeper and come up with a proper, explicit, and (almost) hack-free method to fix it.

Visualizing an idle-in-transaction case

The following chart shows the profile of an idle-in-transaction case over a period of two weeks, before and after fixing the problem.

Visualizing idle-in-transaction, before and after

As you can see, we’re talking about tens of seconds during which connections are being held in an unusable state. In the context of a user waiting for a page to load, that is an excruciatingly long period of time.

The solution: expunged objects and frozen models

Expunging objects to prevent long-lasting idle connections

SQLAlchemy works with sessions that are, simply put, in-memory caches of model instances. The code behind these sessions is quite complex, but usage boils down to either explicit session reference...

...or implicit usage.
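As an illustration of the two styles, here is a minimal, self-contained sketch; the User model and in-memory SQLite engine are our own stand-ins, and scoped_session stands in for the implicit, registry-based usage:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

# Explicit session reference: we hold and use the session object directly.
session = SessionFactory()
session.add(User(name="alice"))
session.commit()
explicit_user = session.query(User).filter_by(name="alice").one()

# Implicit usage: a thread-local registry hands out the "current" session,
# so code can query without being passed a session explicitly.
CurrentSession = scoped_session(SessionFactory)
implicit_user = CurrentSession.query(User).filter_by(name="alice").one()
```

Either way, the first query opens a transaction behind the scenes.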

Both of these approaches will ensure a transaction is opened and will not close it until a later session.commit() or session.rollback(). There's actually nothing wrong with calling session.commit() when you need to explicitly close a transaction that you know is opened and you’re done with using the DB, in that particular scope.

To address the idle-in-transaction problem generated by such a line, we must keep the code between the query and the commit relatively short and fast (i.e. avoid blocking calls or CPU-intensive operations).

It sounds simple enough, but what happens if we access an attribute of a DB model after session.commit()? It will open another transaction and leave it hanging, even though it might not need to hit the DB at all.

While we can't foresee what a developer will do with the DB object afterward, we can prevent usage that would hit the DB (and open a new transaction) by expunging it from the session. An expunged object will raise an exception if any unloaded (or expired) attributes are accessed. And that’s what we actually want here: to make it crash if misused, rather than leaving idle-in-transaction connections behind to block DB resources.
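To see the expunging behavior in isolation, here is a self-contained sketch (again with a stand-in User model and an in-memory SQLite engine): after commit, the instance's attributes are expired, and once the object is expunged, touching them raises instead of silently reopening a transaction.

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker
from sqlalchemy.orm.exc import DetachedInstanceError

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
session = sessionmaker(bind=engine)()

user = User(name="alice")
session.add(user)
session.commit()       # commit expires the instance's attributes...
session.expunge(user)  # ...and expunging detaches it from the session

try:
    user.name          # would need the DB to refresh: crash, not a
                       # silent new transaction
    raised = False
except DetachedInstanceError:
    raised = True
print(raised)  # True
```

That loud failure is precisely the safety net we want.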

Building an expunging context manager to handle transactions and connections

When working with multiple objects and complex queries, it’s easy to overlook the necessary expunging of those objects. It only takes one un-expunged object to trigger the idle-in-transaction problem, so you need to be consistent.

Objects can't be used for any kind of DB interaction after being expunged. So how do we make it clear and obvious that certain objects are to be used within a limited scope? The answer is a Python context manager to handle SQLAlchemy transactions and connections. Not only does it allow us to visually limit object usage to a block, but it will also ensure everything is prepared for us and cleaned up afterwards.
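As a rough, functional approximation of the idea (the real implementation subclasses Session, as described in the following paragraphs; here a contextlib-based sketch with a stand-in Model is enough to show the shape):

```python
import contextlib

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Model(Base):
    __tablename__ = "models"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

@contextlib.contextmanager
def begin(expunge=False):
    """Open a session for the duration of the block; commit on success
    (closing the underlying transaction) and optionally expunge every
    object so misuse outside the block crashes loudly."""
    session = SessionFactory()
    try:
        yield session
        session.commit()
    except Exception:
        session.rollback()
        raise
    finally:
        if expunge:
            session.expunge_all()

with begin(expunge=True) as session:
    session.add(Model(name="demo"))
    count = session.query(Model).count()
```

Once the block exits, the transaction is closed and every object it touched is expunged.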

The construct above normally opens a transaction block associated with a new SQLAlchemy session, but we've added a new expunge keyword to the begin method, instructing SQLAlchemy to automatically expunge objects associated with the block's session (the tx.session). To get this kind of behavior from a session, we need to override the begin method (and friends) in a subclass of SQLAlchemy's Session.

We want to keep the default behavior and use a new ExpungingTransaction instead of SQLAlchemy's SessionTransaction, but only when explicitly instructed to by the expunge=True argument.

You can use the class_ argument of sessionmaker to instruct it to build an ExpungingSession instead of a regular Session.
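Here's a sketch of that wiring; the ExpungingSession below is just a stub to demonstrate the class_ mechanism, not the full subclass:

```python
from sqlalchemy import create_engine
from sqlalchemy.orm import Session, sessionmaker

class ExpungingSession(Session):
    """Stub standing in for the real subclass, which overrides
    begin() (and friends) to accept expunge=True."""
    pass

engine = create_engine("sqlite://")
SessionFactory = sessionmaker(bind=engine, class_=ExpungingSession)

session = SessionFactory()
print(type(session).__name__)  # ExpungingSession
```

Every session the factory produces is now an instance of our subclass.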

The last piece of the puzzle is the ExpungingTransaction code, which is responsible for two important things: committing the session so the underlying transaction gets closed and expunging objects so that we don't accidentally reopen the transaction.

By following these steps, you get a useful context manager that forces you to group your DB interaction into a block and notifies you if you mistakenly use (unloaded) objects outside of it.

Using frozen models to deal with expunged objects

What if we really need to access DB models outside of an expunging context?

Simply passing models to functions as arguments helps achieve a worthwhile goal: decoupling model retrieval from actual usage. However, such functions are no longer in control of what happens to those models afterwards.

We don't want to forbid all usage of models outside of this context, but we need to somehow inform the user that the model object comes “as is,” with whatever loaded attributes it has. It's disconnected from the DB and shouldn't be modified.

In SQLAlchemy, when we modify a live model object, we expect the change to be pushed to the DB as soon as commit or flush is called on the owning session. With expunged objects this is not the case, because they don't belong to a session. So how does the user of such an object know what to expect from a certain model object? The user needs to ensure that she:

  • Doesn't access an unloaded attribute of a live DB object, as it may open an unwanted transaction
  • Doesn't modify attributes of an expunged object, as it won't be saved

To safely and explicitly pass along these kinds of model objects, we introduced frozen objects. Frozen objects are basically proxies to expunged models that won't allow any modification.
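A frozen proxy can be as small as an attribute-forwarding wrapper that rejects writes. The sketch below (our own minimal stand-in, demonstrated with a plain Python object) captures the core behavior:

```python
class Frozen:
    """Read-only proxy over an expunged model instance (minimal sketch)."""

    __slots__ = ("_obj",)

    def __init__(self, obj):
        object.__setattr__(self, "_obj", obj)

    def __getattr__(self, name):
        # Forward attribute reads to the wrapped (expunged) instance.
        return getattr(object.__getattribute__(self, "_obj"), name)

    def __setattr__(self, name, value):
        # Writes would be silently lost on an expunged object, so forbid them.
        raise AttributeError("frozen object: modification is not allowed")


class FakeModel:
    def __init__(self, name):
        self.name = name

frozen = Frozen(FakeModel("alice"))
print(frozen.name)  # alice

try:
    frozen.name = "bob"
    blocked = False
except AttributeError:
    blocked = True
print(blocked)  # True
```

Reads pass through; writes fail loudly instead of being silently dropped.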

To work with these frozen objects, we added a freeze method to our ExpungingSession:

So now our code would look something like this:

Now, what if we want to modify the object outside of this context, later on (e.g. after a long-lasting HTTP request)? As our frozen object is completely disconnected from any session (and from the DB), we need to fetch a warm instance associated with it from the DB and make our changes to that instance. This is done by adding a helper fetch_warm_instance method to our session...

...and then our code that modifies the object would look something like this.
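A hypothetical fetch_warm_instance can simply re-fetch by primary key. The following self-contained sketch (stand-in Model, in-memory SQLite, plain sessions instead of the expunging context manager) shows the round trip:

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Model(Base):
    __tablename__ = "models"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)
SessionFactory = sessionmaker(bind=engine)

def fetch_warm_instance(session, frozen_id):
    """Fetch a live ("warm") instance matching a frozen object's key."""
    return session.get(Model, frozen_id)

# First context: create the row, keep only what a frozen proxy would carry.
session = SessionFactory()
session.add(Model(name="before"))
session.commit()
frozen_id = session.query(Model.id).scalar()
session.close()

# Later, second context: fetch a warm instance and modify it.
session = SessionFactory()
warm = fetch_warm_instance(session, frozen_id)
warm.name = "after"
session.commit()  # the change is flushed and committed right away
name = session.query(Model.name).scalar()
session.close()
print(name)  # after
```

The modification happens on a live instance inside a short-lived transaction, never on the frozen proxy itself.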

When the second context manager exits, it will call commit on tx.session, and changes to my_model will be committed to the DB right away.

Frozen Relationships

We now have a way of safely dealing with models without generating idle-in-transaction problems, but the code quickly becomes a mess if we have to deal with relationships: We need to freeze them separately and pass them along as if they aren’t related. This could be overcome by telling the freeze method to freeze all related objects, recursively walking the relationships.

We'll have to make some adjustments to our frozen proxy class as well.

Now, we can fetch, freeze, and use frozen objects with any preloaded relationships.

Additional recommendations and caveats

  • Don't call session.commit() inside an expunging context manager's block. In fact, avoid using session at all and use tx.session instead. The context manager will take care of flushing and committing the session when exited.
  • Avoid nested sessions inside the context block.
  • Try to use one single query inside a context manager. If you need multiple queries, it often makes sense to use separate context blocks for each one.
  • If you don't need to pass along an entire model object, you don't need to freeze it. Imagine that you only need an object's id or name attribute; you can simply store it in a variable while inside the expunging context block.

Avoid idle-in-transaction connection states to preserve DB resources

While the code to access the DB with SQLAlchemy may look simple and straightforward, one should always pay close attention to transaction management and the subtleties that arise from the various layers of the persistence stack.

We learned this the hard way, when our services eventually started to exhaust the DB resources many years into development.

If you recently decided to use a software stack similar to ours, you should consider writing your DB access code in such a way that it avoids idle-in-transaction issues, even from the first days of your project. The problem may not be obvious at the beginning, but it becomes painfully apparent as you scale.

If your project is mature and has been in development for years, you should consider planning changes to your code to avoid or minimize idle-in-transaction issues while the situation is still under control. You can start writing new idle-in-transaction-proof code while planning to gradually update existing code, according to the capacity of your development team.

International SaaS Salary Calculator

How We Built an International SaaS Salary Calculator for Our Distributed Team

By Adeline Bodemer
5 min read.

Like any major topic in your company, your compensation policy should reflect your organizational values.

At Gorgias, we created a compensation calculator that reflected ours, setting salaries across the organization based on 3 key principles:

  1. Compensation should be based on data
  2. Compensation should reflect everyone’s ownership, meaning everyone should have equity
  3. Compensation should be transparent

Since the beginning, we applied the first two: Each of our employees was granted data-driven stock options that beat the market average.

However, we were challenged internally: Our team members asked how much they would make if they switched teams or if they got promoted.

This led to the implementation of our third key principle, as we shared the compensation calculator with everyone at Gorgias and beyond: See the calculator here.

This was not a small challenge. We’re sharing our process in hopes that we can help other companies arrive at equitable, transparent compensation practices.

We built our compensation calculator using four key indicators

First, let’s get back to how we built the tool. We had to decide which criteria we wanted to take into account. Based on research articles and benchmarks on what other companies did before, we decided that our compensation model would be based on 4 factors: position, level, location, and strategic orientation.

If we had to sum it all up briefly, our formula looks like this:

Salary = Average of data (for the position at the defined percentile and level) × Location index

Salaries are based on four criteria: position, level, location, and strategic orientation.
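In code, the formula is just an average scaled by the location index. Here's a sketch with hypothetical benchmark numbers (only the 0.56 index for Paris is taken from our actual grid):

```python
def base_salary(benchmarks, location_index):
    """Average the market data points for the position at the chosen
    percentile and level, then scale by the location index."""
    return sum(benchmarks) / len(benchmarks) * location_index

# Hypothetical benchmark data points for one role, adjusted for Paris (0.56):
offer = base_salary([120_000, 130_000, 125_000], 0.56)
print(round(offer))  # 70000
```

The percentile and level choices happen upstream, when selecting which benchmark data points go into the average.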

Position

This is the job title someone has in the company. It looks simple, but it can be challenging to define! Even if the titles don’t really vary from one company to another, people might have different duties, deal with much bigger clients or have more technical responsibilities. Sometimes your job title or position doesn’t match the existing databases.

For some of these roles, when we thought our team members were doing more than the market average, we cross-referenced several databases to get something closer to fairness.

Level

To assess a level we defined specific criteria in our growth plan for each job position. It is, of course, linked to seniority, but that is not the primary factor. When we hire someone, we evaluate their skills using specific challenges and case studies during our interview processes.

Depending on the database, you’ll find levels like beginner, intermediate, and expert, which we represent as L1, L2, L3, and so on. We decided to go with six levels from L1 to L6 for individual contributors and six levels in management, from team lead to C-level executive.

Location

Our location index is based on the cost of living in a specific city (we rely on Numbeo, for instance) and on the average salary for a position we hire (we use Glassdoor). Some cities are better sources of specific talent. By combining the two, we get a more accurate location index.

When we are missing data for a specific city, we use the nearest one where we have data available.

Our reference is San Francisco, where the location index equals 1, meaning it’s basically the most expensive city in terms of hiring. For others, we have an index that can vary from 0.29 (Belgrade, Serbia) to 0.56 (Paris, France) to 0.65 (Toronto, Canada) etc. We now have 50+ locations in our salary calculator — a necessary consideration for our quickly growing, global team of full-time employees and contractors.

Strategic orientation

We rely on our strategic orientation to select which percentile we want to use in our databases. When we started Gorgias we were using the 50th percentile. As we grew (and raised funds), we wanted to be 100% sure that we were hiring the best people to build the best possible company.  

High quality talent can be expensive (but not as expensive as making the wrong hires)! Obviously, we can’t pay everyone at the top of the market and align with big players like Google, but we can do our best to get close.

Since having the best product is our priority we pay our engineering and product team at the 90th percentile, meaning their pay is in the top 10% of the industry. We pay other teams at the 60th percentile.

Some other companies take into account additional criteria, such as company seniority. We believe seniority should be reflected in equity, rather than in salary. If you apply a company-seniority index to salaries, eventually some of your team members will be out of step with the market. Those employees may stay in your company only because they won’t be able to find the same salary elsewhere.

By crossing several databases, we arrived at a more accurate dataset

Data is at the heart of our company DNA.

Where should you find your data? Data is everywhere! What matters most is the quality.

We look for the most relevant data on the market. If the database is not robust enough, we look elsewhere. So far we have managed to rely on several of them: Opencomp, Optionimpact, Figures.hr, and Pave are some major datasets we use for compensation. We’re curious and always looking for more. We’ll soon dig into Carta, Eon, and Levels. The more data we get, the more confident we are about the offers we make to our team.

Once we have the data, we apply our location index. It applies to both salaries and equity.

To build our equity package, we start from the compensation data and then apply a “team” multiplier and a “level” multiplier. Those multipliers rely on data, of course: we use the same databases mentioned above, as well as the Rewarding Talent documentation for Europe.

Internal communication is key

As we mentioned above, once our tool was robust enough, we shared it internally.

To be honest, checking and checking again took longer than expected. But we all agreed that we’d rather release it to good reactions than rush it and create fear. We postponed the release for one month to check and double-check the results.

For the most effective release, we decided to do two things:

  1. We shared it with one team at a time. This was done to anticipate the flow of questions, though we didn’t receive that many.
  2. We shared it with a lot of humility. Even if we checked the data many times, we could have missed something, or there could have been something that lacked consistency. We asked everyone to stress-test it and to provide feedback.

Overall, the reactions have been great. People loved the transparency and we got solid feedback.

We released the new calculator in September 2021, and overall we’re really happy with the response. We also had positive feedback from the update this month.

Let’s see how it goes with time.

Next step: sharing it with the rest of the industry

Let’s be humble here: It’s only the beginning. It’s a Google Sheet. Of course, we’ll need to iterate on it. 

In the meantime, you can check out the calculator here.

So far we’ve made plans to review the whole grid every year. However, now that it’s public within the teams, we can collect feedback and potentially make some changes. Everyone can add comments as they notice potential issues.

The next step for us is to share it online with everyone, on our website, so that candidates can have a vision of what we offer. We hope we’ll attract more talent thanks to this level of transparency and the value of our compensation packages.

Engaging Employees in a Hybrid World

Gorgias’s Playbook for Engaging Employees in a Hybrid World

By Adeline Bodemer
6 min read.

I come from the world of physical retail where building a bond was more straightforward. We often celebrated wins with breakfast and champagne (yes, I’m French!) or by simply clapping our hands and making noise of joy.

We would also have lunch together every day, engaging in many informal discussions.

Of course, it bonded us! I knew my colleagues’ dogs’ names and their plumber problems, and I felt really close to many of them.

Employee engagement is one of the primary drivers of productivity, work quality, and talent retention. When I joined Gorgias, where we have a globally distributed team, I wondered how you create the sense of belonging that drives that engagement.

The ingredients for employee engagement

Like many companies now, our workforce is distributed. But at Gorgias, it’s a truly global affair: Our team lives across 17 countries, four continents, and many different time zones, which can be challenging.

And yet, I believe Gorgias culture is truly amazing and even better than the one I used to know.

I realize that we achieved that by relying on the critical ingredients of a strong relationship:

  • Strong moments - Simply sharing coffee won’t take you very far in getting to know your colleagues. But creating some great moments together will bring you one step further.
  • Repetition - If you don’t nourish the relationship consistently, it may unravel with time. You won’t feel as connected as before.

By repeating these strong moments, you can make the connection between people stronger as well. The stronger the connection, the stronger the engagement.

Speaking of a strong engagement, Gorgias’s eNPS (employee Net Promoter Score) is 50. How is this possible? Well, what’s always quoted as one of our main strengths is the company culture, and how it connects our employees.

Let’s take it further by exploring five actionable steps we have taken to make that happen.

Organize virtual summits (quarterly) 

While some would push back against events like these falling under the purview of the People team, they are important for building strong culture, team cohesion, and employee happiness — all areas that are definitely part of our directive.

Here’s what you need to know to bring these summits to your organization.

What is the virtual summit?

As the name states, it’s a virtual event where the whole company connects.

It’s not mandatory, but it is highly recommended to attend because it’s fun and you learn many things.

It’s a mix of company updates, fun moments, and inspiring sessions. Each session is short, to give everyone a chance to breathe.

Typically we have three kinds of sessions:

  • Company updates range from intro sessions with the CEO and team lead presentations, to founder Q&As. During these sessions we have a short retro on the quarter to share strategic vision, which also provides an opportunity for the whole team to challenge the company leadership.
  • Fun moments include activities like scavenger hunts, quizzes, online escape games, and online musical activities.
  • Inspiring sessions covered topics including the benefits of a morning routine and recruiting tips. These sessions help us to learn and grow, a top priority for our teammates.

Due to timezones, some sessions don’t include every country.

What are the key elements to make it work?

  • Teamwork: Pretty obvious, right? But a great team is key to making the virtual summit a success. Identify who can be the owner of this whole event. In our case, it was someone from the People team, our Office and Happiness manager.

  • Delegation: Get help from other teams to build the summit content. Having your team building that all alone would be overwhelming. Delegate! The customer success team can help you build the quiz: “How well do you know our customers?” for instance. The recruiting team can share how to be a good recruiter. And external vendors can help with specific games — we used virtual event contractors for the ones that would’ve been too cumbersome to build.

  • Tools: Look for a solid platform to rely on. We used to rely on Google Meet, but since we have a growing number of employees, we use Bevy to cater to our virtual event needs.

  • Content: A nice video at the beginning of the session is always a good icebreaker, and it sets the mood. The same goes for engaging slides. Even though we rarely use slide decks, dynamic slides are more effective than boring written docs for engaging 200 people for half-hour blocks. We share slides to present the company updates and the learning sessions.

  • Anticipation: I think we can all agree that last-minute organization doesn’t work. The more you anticipate, the less stressful it will be. And the bigger your company is, the more things you need to anticipate.

How much does it cost?

Our last virtual summit cost us roughly $13,000, which means $65 per head. Here’s the breakdown:

  • Content: $4,000
  • Speakers for learning sessions: $2,000
  • Games/animations: $5,000
  • Food: $2,000

What are the challenges?

The first thing you might already have in mind is: It takes time! And you’re right.

The more we grow, the more challenging it becomes to organize these events.

I believe we’ll eventually need to have a dedicated event manager for all of our physical and virtual events. I want to have them within my team, and I 100% believe it’s worth it.

Another challenge can be technical difficulties with your event software choice, so make sure that you find a reliable platform that suits your needs.

Allow in-person gatherings at the nearest hub (quarterly)

Our team is a mix of hybrid and full-remote workers.

Since we don’t want the full-remote people to become disconnected, we highly encourage them to join the nearest hub once a quarter.

And when they do, we organize some happy hours, games or movie nights. Those face-to-face activities help create bonds between employees. It’s simple and doesn’t require a lot of organization, but it creates an incredible moment every time the remote teams join. We call them Gorgias Weeks.

Organize a company offsite (annually)

Of course, with the pandemic, that’s not an easy one.

We were fortunate to be able to organize our company offsite and gather a massive part of the crew together in October 2021. 

The pandemic created doubt and additional points of stress, but looking back I’m so glad we were able to create an opportunity for everyone to meet in person.

We asked everyone to bring a health pass — full vaccination or PCR test — and we picked a location that allowed for a lot of outdoor activities.

We made sure the agenda for the two days was not too busy. As with our virtual summit, it was a balance of company alignment, learning, and fun. We made sure people had enough free time to relax, talk to each other, play games, or play sports.

This company offsite is surely an essential and strong moment for us and it helps create strong bonds and great memories.

Encourage team offsites (annually)

We encourage every team to organize their own offsite for team-building purposes. Since people don’t meet a lot physically, having these once a year is great!

We let each team lead own it. They pick the location and the agenda. Then, we provide guidelines with the budget.

Needless to say, it helps build stronger bonds and great memories.

Have informal fun moments (weekly) 

In my experience, it was quite tough to create those moments internally with the team. That’s why we decided to start our team meeting with a fun activity of 10-15 minutes, where we are able to share more than just work. 

Every week, there is a different meeting owner who has to come up with new fun activities and games. Starting the meeting with this kind of ice-breaking activity brings powerful energy, and people are more engaged and effective in the sessions. I would recommend it to everyone, especially to those who think, “We already have so many things to review in those weekly meetings, we don’t have time for that.” Try it once, you’ll see how the energy and productivity are different afterward. 

On top of that, I also believe tools that encourage colleagues to randomly meet together are great. On our side we use Donut. It gives a weekly reminder that encourages employees to make it to their meeting with a colleague.

Team cohesion and employee happiness are worthwhile investments

Overall, we’ve organized six virtual summits, four company retreats, three Gorgias weeks, and hundreds of virtual coffee and fun meetings. 

At the beginning there were only 30 people in the company — now there are 200 of them. As I mentioned, it’s becoming more and more challenging to organize these meetups, but it’s also the most exciting part: making sure the next summit is better than the previous one! 

Of course, I’m aware that employee fulfillment and connection  are not the only ingredients for retention. But they are key ingredients and shouldn’t be forgotten, especially as we all become more remote. 

It’s a worthy investment to organize these events and allocate resources to them, because it makes everyone at Gorgias feel included and connected. And I have no doubt, now, that it’s part of our responsibilities in People Ops.

Customer Service Twitter

10 Best Practices for Providing Exceptional Customer Service on Twitter

By Ryan Baum
8 min read.

When a customer's problem goes unanswered on Twitter, you lose that customer and possibly the audience of people who watched it happen. 

It’s hard to come back from that, which is why customer care is so important on social media platforms. In fact, Shopify found that 57% of North American consumers are less likely to buy if they can’t reach customer support in the channel of their choice.

Your customers want to talk to you — and you should want the same, before they head to a competitor. But first, you need to build a customer support presence on Twitter that lives up to your broader customer experience.

We've helped over 8,000 brands upgrade their customer support and seen the best and worst of social media interactions. Here are our top 10 battle-tested best practices for providing exceptional Twitter support.

1. Promptly and accurately respond to tweets

Prompt response time is one of the most important pillars of great customer service, and according to data from a survey conducted by Twitter, 75% of customers on Twitter expect fast responses to their direct messages. 

Of course, responding with accurate and helpful information is ultimately even more important than responding in real time, so be sure that you don't end up providing inaccurate information in a rush to reduce your response times. 

Promptly and accurately responding to customer service issues that are sent to your company's Twitter account is often easier said than done. To do both, you need an efficient system and a well-trained customer support team. 

This is where a helpdesk is critical, to bring your Twitter conversations into a central feed with all your other tickets. 

tweets in a helpdesk feed

If you’re trying to manage Twitter natively in a browser, or through copy-paste discussions with your social media manager, you’re not going to see the first-response times you need to succeed. 

As Twitter's survey data shows, speed is a necessity to meet customer expectations and provide a positive experience.

2. Move conversations out of the public space

There may be instances where customers contact your Twitter support account via a public mention rather than a direct message. In fact, according to data from Twitter, one in every four customers on the platform will tweet publicly at brands in the hopes of getting a faster response. In these instances, it's important to take the conversation out of the public space as soon as possible by moving it to DMs.

There are a couple of reasons you would want to avoid resolving customer service issues on a public forum. For one, keeping customer service conversations private allows you to maintain better control over your brand voice and image since customer service conversations can often get a little messy and may not be something you want to broadcast to your entire audience. 

Moving conversations out of the public space also enables you to collect more personal data from the customer, such as their phone number, other contact information, and details about their order, without having to worry about privacy concerns. (Even in DMs, avoid collecting payment card details; direct customers to a secure checkout or payment link instead.)

In Gorgias, you can set up an auto-reply rule that responds to public support questions and directs them to send a DM for further help. This can ensure that people feel heard immediately, even if it takes a while for your team to get to their DM.
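To make the idea concrete, here's a minimal sketch of the kind of logic such an auto-reply rule encodes. The keyword list and reply text are illustrative placeholders, not Gorgias defaults, and a real rule would be configured in the Gorgias UI rather than written as code.

```python
# Hypothetical keywords that flag a public tweet as a support request.
SUPPORT_KEYWORDS = {"help", "order", "refund", "broken", "support"}

def auto_reply(tweet_text, is_public):
    """Return a canned public reply if a public tweet looks like a support request."""
    if not is_public:
        return None  # private DMs go straight to the agent queue, no auto-reply
    # Normalize words: strip common punctuation and lowercase.
    words = {w.strip("#@!?.,").lower() for w in tweet_text.split()}
    if words & SUPPORT_KEYWORDS:
        return ("Thanks for reaching out! Please send us a DM with your "
                "order details so we can help right away.")
    return None  # general chatter doesn't trigger the rule
```

The point of the rule is the immediate public acknowledgment: the customer feels heard right away, and the actual resolution happens privately in DMs.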

3. Don’t get into emotional arguments

Regardless of whether you are discussing an issue with a customer via your Twitter account or any other medium, it is never a good idea for your reps to get into arguments with the customer. 

Social media platforms such as Twitter tend to feel much more informal than other contact methods, and the semi-anonymity they provide can sometimes bring out the worst in people. You may find that customers who contact you via Twitter are a little more argumentative than customers who reach out through more formal channels. 

Nevertheless, it is essential for your Twitter support reps to maintain professionalism and avoid engaging in emotional arguments with customers. It may even help to establish guidelines for your team, to help deal with this type of customer tweet. You can include rules on emoji use, helpful quick-response scripts, and whatever other priorities you have.

Recommended reading: How to respond to angry customers

4. Have a direct way for your support agent to reply to tweets

It is certainly possible to use Twitter alone when providing customer support via the platform. However, this isn't always the most efficient way to go about it. 

Keep in mind that, like other social networks, Twitter wasn't necessarily designed to be a customer support channel. There aren't a lot of Twitter features beyond basic notifications that will be able to help your team organize support tickets. 

Thankfully, there are third-party solutions that allow your support agents to respond to tweets and Twitter direct messages in a much more organized and efficient way. At Gorgias, for example, we offer a Twitter integration that automatically creates support tickets anytime someone mentions your brand, replies to your brand's tweets, or direct messages your brand. (By the way, we also offer integrations for Facebook Messenger and WhatsApp.)

Agents can then respond to these messages and mentions directly from the Gorgias platform, where they will show up in the same dashboard as the tickets from your other support channels. 

This integration makes Twitter customer support far more efficient for your team and is one of the most effective ways to take your Twitter customer support services to the next level.

reply to tweets within your helpdesk

5. Always respond to feedback (even if it’s negative)

It's important to respond to all questions and feedback that customers share via Twitter, even when that feedback is negative. This is a key part of relationship marketing.

Many brands shy away from responding to negative feedback on public forums for fear of drawing more attention to the issue. However, this doesn't usually have the desired effect. Failing to respond to negative feedback can make it seem to anyone who happens to see the tweet in question that your brand is dodging the issue. 

While you may wish to move the conversation out of the public space as soon as possible, you should always provide a public response to public feedback — negative or not. 

For examples of brands effectively responding to negative tweets, check out this article.

6. Be as personable as possible

According to data from Forbes, 86% of customers say that they would rather speak with a real human being than a chatbot. Even if you don't rely on chatbots for providing customer support, though, your customers may not be able to tell the difference unless you train your reps to be as personable as possible. 

When your reps tailor their responses and connect on a personal level, it provides a much more positive support experience that provides a halo effect to your brand. Customers will remember that the next time they arrive at the checkout button, and they might even be open to upsell opportunities at that very moment.

7. Create a tracking strategy for brand mentions

Small businesses may not struggle to keep up with brand mentions, given that there are fewer to track. For larger companies, though, keeping up with brand mentions can be a difficult task. This is especially true when some users tag brands with hashtags instead of handles.

This makes it important to create an effective strategy for tracking brand mentions in an efficient and organized manner. One of the best ways to go about this is to utilize integrations that will create a support ticket anytime a customer mentions your brand in a tweet. You can even create custom views in Gorgias to centralize all of these mentions.

By tracking these brand mentions, you can also retweet positive posts for brand awareness.
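The handle-versus-hashtag problem mentioned above is easy to illustrate. The sketch below matches both `@brand` and `#brand` forms in tweet text; `acmeco` is a hypothetical brand name, and this is a simplified stand-in for what a helpdesk integration does behind the scenes.

```python
import re

# Hypothetical brand name for illustration.
BRAND = "acmeco"

# Match the brand preceded by @ or #, not embedded inside a longer word
# or an email address (hence the word-boundary lookarounds).
MENTION_RE = re.compile(rf"(?<!\w)[@#]{re.escape(BRAND)}(?!\w)", re.IGNORECASE)

def mentions_brand(tweet_text):
    """True if the tweet tags the brand by handle (@) or hashtag (#)."""
    return bool(MENTION_RE.search(tweet_text))
```

Note that a plain-text mention with no `@` or `#` slips through this matcher, which is exactly why dedicated tracking tooling is worth setting up.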

Brand mentions view in Gorgias

8. Create guidelines to explain which issues you support via Twitter

Not every customer service issue can be handled via Twitter. If there are certain types of issues that fall into that category for your brand, it's a good idea to keep your customers in the loop by providing concise FAQ guidelines that explain which issues you do and don't support via Twitter. 

These guidelines can come in the form of a pinned Tweet at the top of your Twitter support account or an off-Twitter link that you provide to customers when they contact you on Twitter with an issue that requires a different medium for resolution. You could even have a visual you add when you respond to questions that don’t fit your guidelines. 

Simply responding to customers and requesting that they direct message you for further assistance is another option for addressing issues that you don't want to handle on Twitter. If you set up the auto-reply we mentioned in the second tip, above, it could even include a link to these guidelines.

Check out what this brand did when contacted on Twitter with a problem that needed to be taken off-platform in order to be resolved.

9. Consider having multiple Twitter handles for sales, marketing, and customer support

If it makes sense for your brand, consider creating separate Twitter handles for sales, marketing, and customer support. Dedicated handles serving different purposes let you better organize your direct messages and mentions by breaking them into distinct categories. 

Having a designated customer support Twitter account can also encourage customers to contact you via Twitter with their support issues, since it reassures them that this is the purpose the account serves. 

But even then, some customers will still tweet at your main account with issues. When this happens, you can use intent and sentiment analysis in Gorgias to automatically route those issues to the correct agent or team.
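For intuition, here's a toy, keyword-based stand-in for that routing step. Gorgias's actual feature uses ML-based intent and sentiment detection, not keyword lists; the team names and keywords below are illustrative assumptions.

```python
# Illustrative keyword sets per team; a real system learns intent instead.
ROUTES = {
    "support": {"refund", "broken", "return", "order", "help"},
    "sales": {"price", "discount", "buy", "restock"},
}

def route_tweet(text):
    """Pick the first team whose keywords appear in the tweet; default to marketing."""
    words = {w.strip("#@!?.,").lower() for w in text.split()}
    for team, keywords in ROUTES.items():
        if words & keywords:
            return team
    return "marketing"  # general brand chatter falls through to marketing
```

Whatever the implementation, the payoff is the same: tweets sent to the wrong handle still land with the right team instead of sitting in the wrong queue.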

detect the intent behind tweets with Gorgias

10. Understand the full context of every Twitter interaction

When a customer takes the time to reach out to you on Twitter, whether it’s via direct message or a mention, it’s likely not the first time that customer has interacted with your brand. 

If you respond on Twitter, you can see the direct message history on that platform, but that’s where the context ends. With Gorgias’s Twitter integration, you can see the full customer journey, including all social media engagement, support tickets across all of your channels and even past orders.

This context is crucial to understanding the conversation you're walking into, so you can handle the situation appropriately. If the person is a long-time customer who engages frequently, you're going to treat that conversation differently than one with a customer who bashes you on social networks and returns products frequently.

Break down your Twitter customer service silo

Any customer support you provide through Twitter will make things more convenient and accessible for your audience. 

But to make the experience faster and more pleasant on both sides of the conversation, you should consider handling all of your social media customer support in one platform, alongside all your other tickets. 

Gorgias ties social handles to customer profiles from your Shopify, BigCommerce or Magento store, uniting relevant conversations from across all of your support channels. All of that info is automatically pulled into your response scripts, and you can even automate the process for no-touch ticket resolution.

Check out our social media features to learn more.
