High-confidence translations ship automatically. Exceptions go to review.
When a translated segment scores 90 or higher on the LQA scale, or matches existing Translation Memory at 99% or higher, Flixu approves and deploys it automatically — no human step required. Only the segments that meet neither threshold appear in the review queue, with the specific failing dimension flagged.
Auto-Approval Workflows apply two rule-based checks to every translated segment: a Translation Memory match at 99% or higher, or an overall LQA score of 90 or higher. Segments that meet either threshold are approved and deployed without human review. Segments that don't are routed to the review queue with the specific failing dimension marked. The result is a review process that scales with quality, not with volume.
How Auto-Approval works.
The core problem auto-approval solves: Manual review distributes equally across all translated content regardless of quality. A project with 500 segments and 12 errors requires reading all 500 to find the 12. As translation volume grows, the review queue grows proportionally — localization becomes a release bottleneck not because the translations are bad, but because verification assumes they might be.
Auto-Approval inverts this. The system determines what gets reviewed, not the reviewer.
Translation runs with pre-loaded constraints
The translation request runs with the Glossary, Brand Voice, and Translation Memory already loaded. The pre-translation context layer applies your approved terminology and tone before the language model generates output. This reduces the number of segments that will fail the quality check — quality is built in before scoring begins.
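To make that concrete, here's a minimal sketch of what a translation request with the context layer pre-loaded might look like. The field names (`glossary`, `brandVoice`, `useTranslationMemory`) are illustrative assumptions, not Flixu's actual API.

```typescript
// Hypothetical shape of a translation request with the context layer
// pre-loaded. Field names are illustrative, not Flixu's actual API.
interface TranslationRequest {
  sourceLocale: string;
  targetLocale: string;
  segments: string[];
  context: {
    glossary: Record<string, string>; // approved source term -> required target term
    brandVoice: string;               // tone profile applied before generation
    useTranslationMemory: boolean;    // reuse prior approved segments
  };
}

const request: TranslationRequest = {
  sourceLocale: "en",
  targetLocale: "de",
  segments: ["Welcome back, {username}!"],
  context: {
    glossary: { workspace: "Arbeitsbereich" },
    brandVoice: "friendly-professional",
    useTranslationMemory: true,
  },
};
```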
LQA scores every segment
Every translated segment receives a score across five dimensions:
| Dimension | What it evaluates |
|---|---|
| Grammar | Grammatical correctness in the target language |
| Accuracy | Meaning preservation — does the translation say what the source said? |
| Terminology Consistency | Approved glossary terms present and correctly applied |
| Formatting | Tags, placeholders, and variables intact and symmetrical with the source |
| Fluency | Natural reading quality in the target language |
The combined score is the LQA result for that segment.
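As a rough illustration, the per-segment result could be modeled like this. The dimension names mirror the table above; the combination rule shown (a segment is only as good as its weakest dimension) is an assumption for the sketch, since the exact weighting isn't documented here.

```typescript
// Illustrative model of a per-segment LQA result. Dimension names
// mirror the table above; the combination rule (weakest dimension
// wins) is an assumption, not documented behavior.
interface LqaResult {
  grammar: number; // each dimension scored 0-100
  accuracy: number;
  terminologyConsistency: number;
  formatting: number;
  fluency: number;
}

function combinedScore(r: LqaResult): number {
  // Under this rule, a single broken placeholder or missing glossary
  // term drags the whole segment below the threshold.
  return Math.min(
    r.grammar,
    r.accuracy,
    r.terminologyConsistency,
    r.formatting,
    r.fluency,
  );
}
```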
Routing by threshold
Two conditions trigger auto-approval independently:
- LQA score ≥ 90/100 → auto-approved, deployed immediately
- TM match ≥ 99% → auto-approved, deployed immediately
Either condition is sufficient. A segment doesn't need both — a near-exact TM match bypasses LQA scoring entirely, and a high LQA score approves independently of TM overlap.
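The routing rule is simple enough to state in a few lines. This sketch implements exactly the either-or logic described above; the function name and types are illustrative.

```typescript
// The either-or routing rule described above. Names are illustrative.
type Route = "auto-approve" | "review";

function routeSegment(lqaScore: number, tmMatchPct: number, threshold = 90): Route {
  if (tmMatchPct >= 99) return "auto-approve"; // near-exact TM match bypasses LQA
  if (lqaScore >= threshold) return "auto-approve";
  return "review";
}

routeSegment(94, 72); // "auto-approve": high LQA, low TM overlap
routeSegment(61, 99); // "auto-approve": TM match alone is sufficient
routeSegment(84, 85); // "review": meets neither condition
```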
Exceptions routed to review
Segments that meet neither threshold appear in the review queue. Each flagged segment shows the specific dimension that caused the failure — a Terminology Consistency flag means a glossary term is missing or incorrect; a Formatting flag means a variable or tag wasn't preserved. The reviewer knows what to look at before reading the segment.
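A flagged segment in the queue might carry a payload along these lines. The shape is a hypothetical illustration; the key point is that the failing dimension travels with the segment.

```typescript
// Hypothetical review-queue entry. The failing dimension travels
// with the segment, so the reviewer knows where to look first.
interface FlaggedSegment {
  segmentId: string;
  source: string;
  translation: string;
  lqaScore: number;
  failingDimension:
    | "Grammar"
    | "Accuracy"
    | "Terminology Consistency"
    | "Formatting"
    | "Fluency";
}

const flagged: FlaggedSegment = {
  segmentId: "seg_0412",
  source: "Your {planName} trial ends in 3 days.",
  translation: "Ihre Testphase endet in 3 Tagen.", // {planName} was dropped
  lqaScore: 71,
  failingDimension: "Formatting",
};
```

In the example, the `{planName}` placeholder was dropped from the German translation, so the flag is Formatting rather than Accuracy: the reviewer knows to restore the variable, not to retranslate.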
Audit log
Every auto-approved segment is logged with its LQA score, the approval rule triggered (LQA threshold or TM match), and the timestamp. The audit trail is accessible to Admins and Project Managers in the workspace without a separate logging system.
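An individual audit entry covers the three recorded fields. This shape is an assumption for illustration.

```typescript
// Hypothetical audit-log entry covering the three recorded fields:
// the LQA score, the rule that triggered approval, and the timestamp.
interface AuditEntry {
  segmentId: string;
  lqaScore: number;
  rule: "lqa-threshold" | "tm-match";
  approvedAt: string; // ISO 8601 timestamp
}

const entry: AuditEntry = {
  segmentId: "seg_0409",
  lqaScore: 96,
  rule: "lqa-threshold",
  approvedAt: "2025-06-12T09:41:07Z",
};
```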
Why auto-approval requires deterministic output.
Auto-approval is only reliable when the translation pipeline produces consistent, predictable output — not stochastic generation that might include added commentary, altered placeholders, or invented content.
The translation pipeline uses Qwen and DeepInfra model routing specifically configured for translation tasks. These models operate within strict constraints: no added sentences, no reformatted structure, no altered variables. A `{username}` placeholder in the source appears as `{username}` in every target language — not `{benutzer}` or omitted entirely. A tag structure `<strong>text</strong>` is preserved as `<strong>translated text</strong>`.
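A deterministic constraint like placeholder preservation is also mechanically checkable, which is part of what makes the Formatting dimension a reliable signal. Here's a minimal sketch of such a check, illustrative only, not Flixu's internal implementation; a fuller version would also cover HTML tags.

```typescript
// Minimal sketch of a placeholder-parity check, illustrative only.
// A real check would also cover HTML tags like <strong>...</strong>.
function placeholdersMatch(source: string, target: string): boolean {
  const extract = (s: string): string[] => (s.match(/\{[^}]+\}/g) ?? []).sort();
  const a = extract(source);
  const b = extract(target);
  return a.length === b.length && a.every((p, i) => p === b[i]);
}

placeholdersMatch("Hi, {username}!", "Hallo, {username}!"); // true
placeholdersMatch("Hi, {username}!", "Hallo, {benutzer}!"); // false: altered variable
placeholdersMatch("Hi, {username}!", "Hallo!");             // false: omitted entirely
```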
This is what Deterministic AI and Zero Hallucinations mean in practice — and it's the prerequisite for trusting an automated approval decision. If the pipeline could add or remove content unpredictably, automated approval would be unsafe. Because the output is constrained to the translation task, the LQA scores are meaningful signals.
→ How the translation pipeline is constrained: The Context Engine
How to set up Auto-Approval for your project.
For teams setting this up for the first time, the configuration takes under ten minutes.
Configure your LQA threshold
In project settings, set the LQA auto-approval threshold. The default is 90/100. Teams with stricter quality requirements can raise it; teams processing high-volume, lower-stakes content can lower it. The threshold applies to all segments in that project.
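As a sketch, the setting might look like this in a project configuration. Field names are assumptions; the point is that the threshold lives at the project level.

```typescript
// Hypothetical project-settings payload. The threshold is a
// per-project setting, as described above; field names are assumptions.
const projectSettings = {
  projectId: "proj_mobile_app",
  autoApproval: {
    lqaThreshold: 90,     // default; raise for stricter content, lower for volume
    tmMatchThreshold: 99, // near-exact Translation Memory matches
  },
};
```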
Load your Glossary and Translation Memory
The pre-translation constraints are what make auto-approval reliable. A project with an empty Glossary will route more segments to review than a project where the Glossary has been populated and the TM has accumulated approvals. Before enabling auto-approval at scale, seed the Glossary with approved terminology and, if available, import existing Translation Memory as a TMX file.
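A seeding step could look roughly like the following. The glossary shape is hypothetical; the TMX snippet is a stripped-down fragment of the standard Translation Memory eXchange format (a full file also carries a `<header>`).

```typescript
// Sketch of seeding a project before enabling auto-approval at scale.
// Glossary shape is hypothetical; the TMX string is a stripped-down
// fragment of the standard Translation Memory eXchange format.
const glossaryEntries = [
  { source: "workspace", target: "Arbeitsbereich", locale: "de" },
  { source: "dashboard", target: "Dashboard", locale: "de" }, // approved as-is
];

const tmxImport = `<?xml version="1.0" encoding="UTF-8"?>
<tmx version="1.4">
  <body>
    <tu>
      <tuv xml:lang="en"><seg>Save changes</seg></tuv>
      <tuv xml:lang="de"><seg>Änderungen speichern</seg></tuv>
    </tu>
  </body>
</tmx>`;
```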
Enable auto-approval for the project
Toggle auto-approval on at the project level. From that point forward, every translation run applies the scoring and routing logic. The first few projects at a new threshold are useful calibration: check what percentage of segments are routing to review and whether the flagged segments are genuinely problematic. Adjust the threshold if needed.
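The calibration check is worth automating: what share of segments routed to review at the current threshold? A sketch, using illustrative types:

```typescript
// Calibration sketch: compute the share of segments routed to review
// at the current threshold. Types are illustrative.
type Route = "auto-approve" | "review";

function reviewRate(routes: Route[]): number {
  if (routes.length === 0) return 0;
  return routes.filter((r) => r === "review").length / routes.length;
}

// e.g. 7 flagged out of 120 segments is a ~5.8% review rate
```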
Monitor the review queue
The review queue shows flagged segments with the failing dimension marked. Reviewing patterns across several batches helps identify whether consistent flags indicate a Glossary gap, a Brand Voice configuration issue, or a content type that consistently produces lower scores. Addressing those upstream — updating the Glossary, refining the Brand Voice profile — reduces the ongoing review load.
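Pattern-spotting across batches can be as simple as tallying failing dimensions, as in this illustrative sketch: a cluster under one dimension points at an upstream fix rather than individual translation problems.

```typescript
// Sketch of tallying failing dimensions across several batches.
// Repeated Terminology Consistency flags suggest a Glossary gap;
// repeated Fluency flags suggest a Brand Voice configuration issue.
type Dimension =
  | "Grammar"
  | "Accuracy"
  | "Terminology Consistency"
  | "Formatting"
  | "Fluency";

function flagCounts(flags: Dimension[]): Map<Dimension, number> {
  const counts = new Map<Dimension, number>();
  for (const d of flags) counts.set(d, (counts.get(d) ?? 0) + 1);
  return counts;
}
```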
When auto-approval changes the release economics.
Mobile app teams shipping daily content updates
A mobile app that updates in-app copy, push notification text, or onboarding strings daily generates dozens to hundreds of translated segments with each release. Running a manual review cycle for each update adds hours between the English version and the localized version going live. With auto-approval configured, standard content — greetings, CTAs, status messages — ships immediately. Unusual or complex strings surface for review. Time-to-market for localized content shrinks from hours to minutes.
→ Mobile app localization workflows: Flixu for Mobile Apps
SaaS teams with weekly product releases
A product releasing features weekly needs localization to run alongside the development cycle, not after it. Teams that previously allocated two to three hours per sprint to localization review — or delayed the international release by a sprint — typically find that auto-approval reduces that overhead to under 30 minutes. The review queue contains only the edge cases; standard UI strings, help text, and feature descriptions auto-approve and go live with the English release.
→ SaaS localization workflow: Flixu for SaaS Teams
Language service providers scaling without adding reviewers
An LSP handling increasing translation volume for multiple clients faces a choice: add reviewers proportionally, or find a way to make each reviewer's time go further. Auto-approval lets a reviewer focus on segments that genuinely need expert judgment — the ones below the quality threshold. The segments above it don't require review; they require only the audit log entry confirming the decision the system made. Review capacity scales with quality complexity, not with raw volume.
→ Agency workflows: Flixu for Agencies
Frequently Asked Questions
What are the two conditions that trigger auto-approval?
A Translation Memory match at 99% or higher, or an LQA score of 90/100 or higher. Either condition is sufficient; a segment doesn't need to meet both. A near-exact TM match is approved immediately regardless of the LQA score. A high LQA score approves independently of whether the string has appeared in the TM before.
Can I configure the LQA threshold for my project?
Yes. The threshold is configurable in project settings. The default is 90/100. Raise it for content where quality requirements are stricter — regulated documentation, customer-facing legal language. Lower it for high-volume content where a slightly higher flag rate is acceptable. The threshold is a project-level setting, not a global account setting.
What happens to the segments that don't auto-approve?
They appear in the review queue with the specific failing dimension marked: Grammar, Accuracy, Terminology Consistency, Formatting, or Fluency. The reviewer can see which dimension triggered the flag before reading the segment — they don't need to re-evaluate the full output from scratch. Most flagged segments require a targeted correction, not a full retranslation.
Is there an audit trail for auto-approved translations?
Yes. Every auto-approved segment is logged with its LQA score, the threshold rule that triggered approval (LQA score or TM match), and the timestamp. Admins and Project Managers can access this log from the workspace. For teams in regulated industries where translation approval records are required, the audit log provides the documented decision trail.
What prevents a low-quality translation from being auto-approved?
The LQA scoring. A segment with a broken placeholder, a mistranslated term, or an accuracy problem will score below 90 on the relevant dimension and fail auto-approval. The scoring runs on every segment — there's no sampling or batch-level check. The pre-translation constraints (Glossary and Brand Voice loaded before the model generates output) also reduce the frequency of low-scoring segments by preventing the most common error types before they occur.
How is Flixu's auto-approval different from what Transifex or Lokalise offer?
Transifex's automations and Lokalise's workflow rules apply rule-based routing based on match percentages and user-defined conditions. Flixu's auto-approval is based on the LQA score — a five-dimension quality measurement that evaluates grammar, accuracy, terminology, formatting, and fluency independently. This means the routing decision reflects a quality assessment, not just a similarity score. A segment can match 85% of a previous translation but still fail auto-approval because it introduced a terminology error — something a match-percentage rule wouldn't catch.
Run your first project with auto-approval enabled.
Set your LQA threshold, load your Glossary, and see what percentage of segments auto-approve on the first run. Most projects see auto-approval rates above 80% for standard content after the Glossary is seeded.
Related Features
- LQA & Quality Assurance — The scoring that determines auto-approval routing
- Translation Memory — 99% TM match as one of the two auto-approval triggers
- GitHub Integration — Automated pipeline where auto-approval enables zero-touch deployment
- The Context Engine — Pre-translation constraints that reduce flagged segments
- Team Collaboration — Review queue access by role