Every segment scored. Only the exceptions reviewed.
Flixu's LQA layer scores every translated segment across five quality dimensions automatically. Segments that clear the threshold are approved without a human review step. Segments that don't are routed to a reviewer with the specific failing dimension flagged. You read what needs reading — not everything by default.
Linguistic Quality Assurance (LQA) in Flixu is an automated scoring layer that evaluates every translated segment across five dimensions: grammar, accuracy, terminology consistency, formatting, and fluency. Segments that score above the threshold — or match your Translation Memory at 99% — are auto-approved. Segments below the threshold are flagged with the specific failing dimension and routed for human review.
How the LQA pipeline works.
In a standard localization workflow, quality assurance works like this: translation runs, a human reviewer reads every segment, corrects what's wrong, and approves what's right. Review effort is spread evenly across all content — the five obvious errors and the 995 perfectly translated strings receive the same attention. Flixu's LQA changes that distribution. The scoring runs automatically after translation and before any human reviewer sees the output.
Translation with pre-loaded constraints
The translation request runs with the glossary, brand voice, and Translation Memory already loaded as constraints. The context layer is applied before generation — this reduces the number of segments that will fail LQA because the constraints that govern quality were already present when the content was generated.
→ How the pre-translation analysis works: The Context Engine
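To make the pre-loading concrete, here is an illustrative sketch of a translation request with the constraints attached up front. The field names and shapes are hypothetical, not Flixu's actual API; the point is that the glossary, brand voice, and TM travel with the request rather than being checked against the output afterwards.

```ts
// Hypothetical request shape, for illustration only (not Flixu's actual API).
interface TranslationRequest {
  sourceSegments: string[];
  targetLocale: string;
  // Constraints loaded before generation, so the output is shaped by them
  // rather than corrected against them afterwards.
  glossary: Record<string, string>;                        // approved source term -> target term
  brandVoice: string;                                      // tone and style guidance
  translationMemory: { source: string; target: string }[]; // previously approved pairs
}

const request: TranslationRequest = {
  sourceSegments: ["Reset your password", "Welcome back, {username}"],
  targetLocale: "de-DE",
  glossary: { password: "Passwort" },
  brandVoice: "Concise, informal second person",
  translationMemory: [
    { source: "Reset your password", target: "Passwort zurücksetzen" },
  ],
};
```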
Automated LQA scoring across five dimensions
Every translated segment receives a score across five dimensions:
| Dimension | What it evaluates |
|---|---|
| Grammar | Grammatical correctness in the target language |
| Accuracy | Meaning preservation — does the translation convey what the source said? |
| Terminology Consistency | Approved glossary terms are present and correctly applied |
| Formatting | Tags, placeholders, and variables are intact and symmetrical with the source |
| Fluency | Natural reading quality in the target language |
Each dimension contributes to an overall segment score. The combined score determines routing.
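As an illustration, a per-segment score could be represented like this. The dimension names come from the table above; the equal-weight average used to combine them is an assumption made for the sketch, not Flixu's published weighting.

```ts
// Illustrative score record; the equal-weight average is an assumption.
interface SegmentScore {
  grammar: number;
  accuracy: number;
  terminologyConsistency: number;
  formatting: number;
  fluency: number;
}

function overallScore(s: SegmentScore): number {
  const values = [s.grammar, s.accuracy, s.terminologyConsistency, s.formatting, s.fluency];
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```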
Routing by threshold
Segments that score above the configured threshold, or that match existing Translation Memory at 99% or higher, are auto-approved without human review. Segments below the threshold are routed to the review queue — not the entire document, just the specific segments that need attention, with the failing dimension marked.
→ Auto-approval workflow: Auto-Approval Workflows
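A simplified sketch of that routing decision, using the defaults described later on this page (a 90-point threshold and a 99% TM match). The function is illustrative, not product code; how Flixu combines per-dimension scores internally isn't shown here.

```ts
// Illustrative routing decision. The 90-point threshold and the 99% TM-match
// shortcut come from this page; the averaging and per-dimension flagging are
// assumptions made for the sketch.
type Route =
  | { decision: "auto-approved" }
  | { decision: "review"; failingDimensions: string[] };

function routeSegment(
  dimensionScores: Record<string, number>, // e.g. { grammar: 96, accuracy: 82, ... }
  tmMatch: number,                         // similarity to Translation Memory, 0 to 1
  threshold = 90,
): Route {
  const scores = Object.values(dimensionScores);
  const overall = scores.reduce((sum, s) => sum + s, 0) / scores.length;

  if (overall > threshold || tmMatch >= 0.99) {
    return { decision: "auto-approved" };
  }

  // Only the dimensions that pulled the segment down are surfaced,
  // so the reviewer knows where to look first.
  const failingDimensions = Object.entries(dimensionScores)
    .filter(([, score]) => score <= threshold)
    .map(([dimension]) => dimension);

  return { decision: "review", failingDimensions };
}
```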
What the LQA layer catches.
Terminology consistency
Every translated segment is cross-referenced against your active glossary. If an approved term appears in the source and the translation doesn't use the approved target-language equivalent, the segment is flagged under the Terminology Consistency dimension. Reviewers see exactly which term deviated and can correct it directly.
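The shape of that check can be sketched as a cross-reference of each approved term pair against the segment. The helper below is hypothetical and uses simple substring matching; a real implementation would need to handle inflection, casing, and word boundaries.

```ts
// Hypothetical terminology check: if an approved source term appears in the
// source segment but its approved target equivalent is missing from the
// translation, flag that term.
function findTermDeviations(
  source: string,
  translation: string,
  glossary: Record<string, string>, // approved source term -> target term
): string[] {
  const deviations: string[] = [];
  for (const [sourceTerm, targetTerm] of Object.entries(glossary)) {
    const termInSource = source.toLowerCase().includes(sourceTerm.toLowerCase());
    const termInTarget = translation.toLowerCase().includes(targetTerm.toLowerCase());
    if (termInSource && !termInTarget) {
      deviations.push(sourceTerm);
    }
  }
  return deviations;
}
```

In this sketch, a source segment containing "password" whose translation never uses "Passwort" would return `["password"]`, which maps to the per-term flagging reviewers see.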
Formatting and variable integrity
Tags, HTML elements, bracketed variables, and placeholders in the source are verified against the translated output. A missing {username} variable, an unclosed HTML tag, or a corrupted placeholder causes a Formatting flag — preventing broken strings from reaching your application.
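That check can be pictured as a token-symmetry comparison: collect the placeholders and tags on each side and flag anything missing, altered, or unexpected. The regex and helper below are illustrative, not Flixu's implementation.

```ts
// Illustrative formatting check: count {placeholders} and HTML-style tags in
// the source and the translation, then report any asymmetry between the two.
function findFormattingIssues(source: string, translation: string): string[] {
  const pattern = /\{[^}]+\}|<\/?[a-zA-Z][^>]*>/g;

  const countTokens = (text: string): Map<string, number> => {
    const counts = new Map<string, number>();
    for (const token of text.match(pattern) ?? []) {
      counts.set(token, (counts.get(token) ?? 0) + 1);
    }
    return counts;
  };

  const sourceTokens = countTokens(source);
  const targetTokens = countTokens(translation);
  const issues: string[] = [];

  for (const [token, n] of sourceTokens) {
    if ((targetTokens.get(token) ?? 0) !== n) issues.push(`missing or altered: ${token}`);
  }
  for (const token of targetTokens.keys()) {
    if (!sourceTokens.has(token)) issues.push(`unexpected: ${token}`);
  }
  return issues;
}
```

For example, `findFormattingIssues("Welcome back, {username}", "Willkommen zurück")` would report the missing `{username}` placeholder before the string reaches your application.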
Accuracy and meaning preservation
The Accuracy dimension evaluates whether the translated segment preserves the meaning of the source. This catches the edge case where a translation is grammatically correct but has shifted meaning — paraphrased a fact incorrectly, dropped a qualifying clause, or introduced a negation that wasn't in the source.
Fluency scoring
Fluency evaluates whether the output reads naturally in the target language — not just whether it's correct, but whether it sounds like something written in that language rather than translated into it. Low fluency scores often indicate awkward constructions or literal word order carried over from the source language.
When LQA scoring changes the workflow.
Agencies managing high-volume client projects
Translation agencies reviewing client output manually pay for the review regardless of how much of it needs correction. A project where 95% of segments translate correctly still requires reading all of them to find the 5% that don't. LQA changes the economics: the review effort concentrates on the segments that failed the quality check, not the full volume.
For agencies translating at scale, that routing difference compounds across clients. Because review effort concentrates on flagged exceptions rather than total output, QA cycle time scales with the number of segments that fail the check, not with the volume translated.
→ Agency workflows: Flixu for Agencies
Enterprise teams with compliance documentation
For content where terminology accuracy carries compliance weight — medical device documentation, legal contracts, regulatory filings — the Terminology Consistency dimension provides a systematic check on every segment, not a sampling-based review. Every term deviation surfaces, regardless of document length or reviewer attention level.
→ Legal and compliance localization: Legal Compliance Use Case
SaaS teams shipping release notes and UI updates
For development teams where localization runs alongside product releases, manual review of every translated string adds latency to the release cycle. With auto-approval configured, segments that score above 90 or match TM at 99% ship without a review step. Only the edge cases — new terminology, unusual constructions, low-confidence segments — require human attention. The release doesn't wait for a reviewer to read everything; it waits for the small set of flagged segments to be addressed.
→ SaaS localization workflow: Flixu for SaaS Teams
Frequently Asked Questions
What are the five LQA dimensions?
Grammar, Accuracy (meaning preservation), Terminology Consistency, Formatting (tags and placeholders), and Fluency. Each segment receives an evaluation across all five dimensions. The combined score determines whether the segment is auto-approved or routed for human review. Reviewers see which specific dimension triggered the flag — they don't need to re-evaluate the entire segment from scratch.
What score threshold triggers auto-approval?
Segments with an overall LQA score above 90, or segments that match existing Translation Memory at 99% or higher, are auto-approved without requiring human review. The threshold is configurable in project settings — teams with stricter quality requirements can raise it; teams processing high-volume low-stakes content can lower it.
Does LQA replace human review entirely?
No — and it's not designed to. LQA routes review to where it's needed; it doesn't eliminate the need for human judgment on complex, ambiguous, or culturally sensitive content. The practical effect is that reviewers spend their time on the segments that genuinely require their expertise, not on confirming that correctly translated segments are correct.
How does LQA differ from Translation Memory matching?
Translation Memory matching evaluates how similar a new source string is to previously approved translations — it's a measure of overlap with past content. LQA evaluates the quality of the freshly generated translation across five dimensions — it's a measure of output quality independent of whether the content has been translated before. Both feed into the auto-approval decision: a 99% TM match or an LQA score above 90 triggers auto-approval.
Does LQA work for all supported file formats?
Yes. LQA runs after translation, regardless of the source file format. Whether the content came from a .docx, an XLIFF file, iOS .strings, .po, JSON, or Markdown, the LQA scoring evaluates the translated segments against the same five dimensions. The Formatting dimension specifically checks that the structural elements of each file type — variables, tags, placeholders — survived the translation intact.
Can I see LQA scores for completed projects to track quality trends?
Each translated segment carries its LQA score, and flagged segments are visible in the review queue with the specific failing dimension marked. For project-level quality visibility — tracking which content types or language pairs produce more flagged segments over time — that data is accessible within the project workspace.
Run your first project with automated quality scoring.
Upload your content, configure your glossary, and see which segments clear the LQA threshold automatically — and which ones need your attention.
Related Features
- Auto-Approval Workflows — How LQA scores trigger automatic approval
- Glossary Enforcement — Terminology constraints that feed the Terminology Consistency dimension
- Translation Memory — How TM matching combines with LQA for auto-approval
- The Context Engine — Pre-translation constraints that reduce LQA flags
- Team Collaboration — How review queues work with team roles