Smart QA
Overview
Front’s Smart QA (quality assurance) feature accelerates time-consuming manual QA processes with AI, giving support leaders a complete view of agent performance without the need for manual ticket-by-ticket reviews. Configure your scorecard to automatically score the criteria you select and display the results in the conversation plugin panel.
Leverage Smart QA to improve team performance and ensure a consistently high-quality customer experience. AI-generated information about agent performance is provided as guidance for your own decision-making.
Please note the following:
Please rely on your own assessment and analysis or that of experts for significant decisions.
Please inform your agents or other End Users when you are applying Smart QA to their conversations.
Check out our Front Academy course here to learn more about getting the most value from Smart QA.
How it works
The Smart QA workflow in Front is as follows:
Set up scorecard criteria in the Front AI tab in workspace settings
Create a rule using the Review conversations with Smart QA rule template
Select conversation type(s)
Select shared inboxes
Select time period to trigger rule
Review QA results in conversation plugin panel or analytics
Scoring criteria
AI automatically scores conversations on the following criteria:
Communication
Brevity
Conversation opening
Empathy
Friendliness
Grammar & spelling
Information gathering
Personalization
Professionalism
Readability
Tone
Solution
Comprehension
Solution offered
Proactivity
Adaptability
Demo offered
Upsell
Rating scales
Smart QA uses two rating scales to score criteria: a range scale (1-5) and a binary scale (0 or 1).
On a range scale, criteria scores are normalized to a 0-1 scale as follows:
1 → 0
2 → 0.25
3 → 0.5
4 → 0.75
5 → 1
On a binary scale, criteria are scored either 0 or 1.
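For illustration, here is a minimal Python sketch of the normalization described above. The function names are hypothetical and are not part of Front's product or API.

```python
def normalize_range_score(score: int) -> float:
    """Map a 1-5 range score onto the 0-1 scale (1 -> 0, 3 -> 0.5, 5 -> 1)."""
    if not 1 <= score <= 5:
        raise ValueError("Range scores must be between 1 and 5")
    return (score - 1) / 4

def normalize_binary_score(score: int) -> float:
    """Binary criteria are already 0 or 1, so they pass through unchanged."""
    if score not in (0, 1):
        raise ValueError("Binary scores must be 0 or 1")
    return float(score)

# Reproduces the mapping table above
print([normalize_range_score(s) for s in range(1, 6)])  # [0.0, 0.25, 0.5, 0.75, 1.0]
```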
Score calculation
In the Smart QA analytics report, each agent has an average score for each criterion on your scorecard. N/A values are not factored into any scoring.
Example score calculation:
An agent received a Tone score of 1/5 on their first conversation and 4/5 on a second conversation.
The two scores are normalized and averaged: 1/5 → 0% and 4/5 → 75%, so (0% + 75%) / 2 = 37.5%, which rounds to 38%.
The 38% score is shown in the Tone field in the analytics report for those two conversations.
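Here is a minimal Python sketch of that aggregation, assuming per-conversation scores have already been normalized to the 0-1 scale and that N/A values are represented as None. The function name is hypothetical and not part of Front's product or API.

```python
def aggregate_criterion_score(scores: list[float | None]) -> int | None:
    """Average normalized scores (0-1) for one criterion, skipping N/A (None) values,
    and return a whole-number percentage."""
    rated = [s for s in scores if s is not None]
    if not rated:
        return None  # every conversation was N/A for this criterion
    return round(sum(rated) / len(rated) * 100)

# Tone example from above: 1/5 -> 0.0 and 4/5 -> 0.75
print(aggregate_criterion_score([0.0, 0.75]))  # 38
```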
Setting up scorecard criteria and rules
Step 1
Click the gear icon, then navigate to your workspace settings.
Step 2
Click Front AI in the left sidebar, then select the Smart QA tab at the top.
Step 3
Click Start setup. You’ll see a list of criteria you can add to your scorecard. Select the ones you’d like to add, then click Continue.
Step 4
Fill in the following fields to create your Smart QA rule:
Inboxes with Smart QA enabled: Select the shared inboxes you want Smart QA to apply to
Conversation type: Select the conversation types you want Smart QA to apply to
Review delay: Enter the time Smart QA should wait before reviewing a conversation after it has been archived or resolved
Step 5
Click Create to finish. When QA is completed on a conversation, you’ll see the action in the conversation’s activity history.
Editing scorecard criteria or rules
Step 1
Click the gear icon, then navigate to your workspace settings.
Step 2
Click Front AI in the left sidebar, then select the Smart QA tab at the top.
Step 3
Click Manage rules to edit your Smart QA rule, click Add criteria to add criteria to your scorecard, or click an existing criterion to edit it.
Step 4
If editing a Smart QA rule, you can adjust fields such as the channel types and inbox(es) the rule applies to and the time period for the trigger, and add any additional conditions as needed.
⚠️ Important: We recommend enabling the Close conversations setting for the selected inboxes. See this article to learn how the Close conversations setting affects AI features.
Click Save when finished.
Step 5
If adding new scorecard criteria, you’ll see a pop-up with a list of criteria you can add to your scorecard. Select the one you’d like to add.
Step 6
Refine the criteria fields as needed, then click Save when finished. You can return to this page at any time to edit or archive the criteria.
Reviewing QA results
Sidebar plugin
Once Smart QA is completed on a conversation, you can access the results using the Smart QA integration in the plugin panel in the right-hand sidebar.
If you don’t see the QA integration, see this article to review how to pin integrations to your sidebar.
Analytics
In Front Analytics, admins can navigate to the Smart QA report to review team performance.
Use the dropdown menu to select which criteria to include in the report, and scroll to the right to view the rest of the table.
Select an aggregate score for a teammate to view the conversations used to calculate their results for the selected timeframe.
Use the export icon to download the aggregate scores per criteria shown in the current table view.
Select a conversation to view the QA results.
Manually overriding AI scores
Admins can manually override AI scores from both the conversation sidebar plugin and the Smart QA analytics report.
Step 1
From the conversation: Navigate to the Smart QA plugin for any conversation with an AI score.
From Analytics: In the Smart QA report, click the aggregate score for a criterion. In the sidebar panel, select a conversation.
In this example, we’ll click Edit from the conversation plugin.
Step 2
Select a scoring icon to adjust the criteria score. You can also add a comment or revise the description using the text box.
Click Save & mark as reviewed to save your changes.
Step 3
The admin's name will replace "Generated by AI" in the reviewer field. Updated scores will appear in the Smart QA report within 1 hour.
Click Edit to continue adjusting scores at any time.
FAQ
What channels are supported?
This feature is available for the following channel types: Email, Front Chat, Portal, SMS, Slack, WhatsApp (native), WhatsApp (via Twilio), Yalo WhatsApp.
Which languages are supported?
Only English is officially supported at this time. While it is possible to use this feature with other languages, unexpected results may occur.
Are both traditional and ticket statuses supported?
Yes. Both traditional statuses (Open/Snoozed/Archived) and ticket statuses (Open/Waiting/Resolved) are supported.
Which conversations does Smart QA review?
Only conversations in shared inboxes will be reviewed. Additionally, a conversation review requires at least one inbound and one outbound message and the conversation must be marked as Resolved or Archived. Admins can also set additional conditions during the rule setup.
Can I exclude certain conversations from being scored?
Yes. You can create multiple rules per workspace to trigger QA evaluation, so you can choose which inboxes are scored. If the conversations you want to exclude are not separated by inbox, admins can manually override the scorecards for those conversations.
Which agent is reviewed in a conversation with multiple agents involved?
The only agent evaluated is the last person to send an outbound message before the conversation is marked as Resolved or Archived.
Who can see the QA results?
Admins with analytics permissions can see all QA results in the Smart QA analytics report and the conversation plugin panel.
Agents can only see their own QA results in analytics and the plugin panel.
How should I interpret the QA score?
Smart QA provides an overview of team members' interactions with customers that you can see either at a glance alongside conversations or in aggregate via analytics. These scores provide a standard measurement you can use to compare results over time or across teams.
What does the N/A score mean?
An N/A score can mean:
Smart QA didn’t detect content that meets the requirements for this criterion.
In the analytics dashboard, N/A will show for criteria that were not on the scorecard at the time of evaluation.
Can I adjust how the AI scores?
AI scoring is not customizable; for example, you cannot make it more or less strict. However, admins can manually override AI scores.
Can I create multiple scorecards?
Yes. Each workspace can have one scorecard, so creating multiple scorecards requires setting them up in separate workspaces.
How accurate can we expect Smart QA to be on our data?
When it comes to factual inaccuracies, we see a general trend of 99% accuracy to the facts in the conversation being evaluated, and we have mitigated 99% of hallucinations.
The majority of Smart QA criteria are based on industry definitions and best practices, so their evaluations may vary slightly from individual or company assessments. In several instances we offer variations of the most subjective criteria to better align with a specific application of that criterion.
What happens to my data? Which providers does Front use?
See this article for additional AI FAQs.
Pricing
Billing information
The Smart QA add-on is included with the latest Enterprise plan at no additional cost.
For the latest Starter/Professional plans and legacy Growth/Scale plans:
The add-on is $20/seat/month and added to your invoice based on your billing cycle.
To purchase the Smart QA add-on, navigate to your billing settings page to activate it.
The add-on will remain on your subscription and will auto-renew unless updated within the Upcoming plan at renewal tab in your billing settings. To learn more about plan changes related to add-ons, see this article.
Start a free trial
To try the Smart QA add-on for free, click Start free trial when hovering over the rule template. This will start a 30-day trial.