Flixu
For Translation Agencies & LSPs

Your translators are spending just as long post-editing bad MT as they would translating from scratch.

Client-specific brand voice. Shared Translation Memory across your whole freelance network. LQA scores you can put in a client report. Flixu is built for agencies that need AI to actually reduce the review load — not add a new one.

What is Flixu for Agencies?

Flixu gives translation agencies a workspace where each client has their own Translation Memory, Glossary, and Brand Voice configuration — isolated from every other client. AI translation runs with those constraints already loaded, so the output your translators review reflects the client's voice and approved terminology from the first draft. LQA scores every segment automatically, giving you auditable quality data you can deliver alongside the translation.

The Problem

The post-editing math doesn't work when the MT output is stylistically wrong.

Two years into adding an AI translation layer, a lot of agencies are looking at the same result: their translators are spending just as long on post-editing as they used to spend on first drafts. The AI output is technically correct and stylistically wrong, and fixing stylistically wrong output for a specific client, with a specific brand voice, turns out to be almost as slow as translating from scratch.

The issue isn't the AI model. It's that generic MT doesn't know Client A's voice versus Client B's voice. It doesn't know that Client A's technical documentation always uses formal register, or that Client B's marketing copy uses a specific set of approved brand terms that must appear verbatim. Without that context loaded before translation, the output needs to be corrected after — and that correction is what's eating the efficiency gain.

This is the pattern that shows up in the community:

"Post-editing MT takes just as long as translating from scratch if the output is bad."

"We need client-specific brand voice, not a one-size-fits-all AI engine."

"Our translators are frustrated because MT output is syntactically correct but stylistically wrong."

The answer isn't a better AI model in isolation. It's loading the right context before the model generates anything.

Terminology consistency across multiple freelancers is an operational problem without a systematic solution.

When a client updates their terminology — a product name changes, a legal term gets refined, a new approved phrase is added to the style guide — the update needs to propagate to every freelancer working on every active project for that client, across every language. Today, that means an email and a hope.

The freelancer on page 40 of a dense technical manual who got the email update on Thursday afternoon may or may not have applied it to the work they submitted Friday morning. The PM finds out when the client flags it in delivery review. By that point, the correction is a revision cycle rather than a prevention.

Glossary Enforcement in Flixu loads approved terms as constraints before the translation request reaches the language model. The term isn't a suggestion the model can override when the context gets ambiguous — it's specified in the payload. When the glossary is updated, every subsequent translation for that client applies the updated constraint. The PM doesn't chase translators by email; the system applies the change.

→ How glossary constraints work: Glossary Enforcement
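
To make "specified in the payload" concrete, here's a minimal sketch of what a constrained translation request could look like. The payload shape and field names below are illustrative assumptions, not Flixu's documented API:

```typescript
// Hypothetical request shape: illustrates "terms as constraints in the
// payload", not Flixu's actual API.
interface GlossaryTerm {
  source: string;         // term as it appears in the source text
  target: string;         // approved rendering, must appear verbatim
  caseSensitive: boolean;
}

interface TranslationRequest {
  clientId: string;         // selects the isolated client workspace
  sourceLang: string;
  targetLang: string;
  segments: string[];
  glossary: GlossaryTerm[]; // loaded BEFORE generation, not applied after
}

const request: TranslationRequest = {
  clientId: "client-a",
  sourceLang: "en",
  targetLang: "de",
  segments: ["Open the Control Hub to configure your workspace."],
  glossary: [
    { source: "Control Hub", target: "Control Hub", caseSensitive: true },
  ],
};
```

Because the constraint travels with the request, updating the glossary changes every subsequent request for that client with no per-translator action.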

Client QA reports require data that manual workflows don't produce.

Clients in legal, medical, and compliance verticals are asking questions that agencies running generic MT post-editing workflows can't answer: What quality score did this translation receive? Which segments were reviewed and approved by a human? Was the glossary consistently applied? Can you show me a record of that?

Without a structured QA pipeline, the answer is "we reviewed it internally" — which is not an answer that wins enterprise procurement pitches or satisfies regulated-industry compliance requirements. The agencies losing those pitches to competitors aren't losing on translation quality; they're losing on the inability to demonstrate it.

Flixu's LQA scores every segment across five dimensions — Grammar, Accuracy, Terminology Consistency, Formatting, and Fluency — automatically, on every project. The scores are logged alongside each segment with the decision trail. That data exists without a PM manually generating a QA report. For clients who need it, it's a deliverable.

"We lost a pitch because we couldn't demonstrate auditability in our QA process."

→ LQA and quality reporting: LQA & Quality Assurance
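
The five dimensions and the decision trail imply a per-segment record along these lines. This is a sketch; the field names are assumptions, not Flixu's schema:

```typescript
// Sketch of a per-segment LQA record. The five dimensions are the ones
// named above; everything else is an assumed field name.
interface LqaScores {
  grammar: number;
  accuracy: number;
  terminologyConsistency: number;
  formatting: number;
  fluency: number;
}

interface SegmentRecord {
  segmentId: string;
  scores: LqaScores;
  overall: number;                              // aggregate used for auto-approval
  decision: "auto-approved" | "human-reviewed"; // the decision trail
  reviewedBy?: string;                          // set when a human signed off
  timestamp: string;                            // ISO 8601
}
```

A client-facing QA report then becomes a query over these records rather than a document a PM assembles by hand.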

When client context lives in a PDF briefing, it doesn't survive team changes.

The brand voice for Client A exists as a 12-page PDF that was written eighteen months ago, delivered to the three translators who were onboarded at the time, and has not been consistently applied to every new project since. When one of those translators leaves and a new one joins, the briefing happens informally — or not at all.

The result is that the client's translation quality is a function of institutional knowledge that doesn't transfer reliably. The agency looks inconsistent. The client notices over time even if they don't articulate it directly.

Brand Voice Manager in Flixu stores the client's tone configuration in their workspace profile. Every translation request for that client applies the same configuration automatically — regardless of which translator is running the project, regardless of when the project started. New translators don't need a briefing session to produce brand-consistent output; the configuration does it.

→ Brand voice configuration: Brand Voice Manager
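
As a sketch of what "stores the client's tone configuration" can mean in practice (the specific fields are illustrative assumptions, not Flixu's schema):

```typescript
// Sketch of a per-client brand voice configuration. Field names and
// values are illustrative, not Flixu's actual settings.
interface BrandVoiceConfig {
  register: "formal" | "neutral" | "casual"; // e.g. Client A: always formal
  tone: string[];                            // e.g. ["precise", "reassuring"]
  personPreference: "first" | "second" | "third";
  doNotTranslate: string[];                  // brand terms kept verbatim
  styleNotes: string;                        // free-form guidance for the model
}

// Stored once on the client workspace; every request for that client
// picks it up automatically, whoever runs the project.
const clientA: BrandVoiceConfig = {
  register: "formal",
  tone: ["precise", "neutral"],
  personPreference: "second",
  doNotTranslate: ["Control Hub"],
  styleNotes: "Technical documentation voice; avoid contractions.",
};
```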

How Flixu fits an agency workflow.

The operational model doesn't require replacing your existing translator relationships or your project management process. Flixu sits between the client context and the translation output.

1. Set up a client profile

When a new client is onboarded, create a workspace profile for them: upload their Translation Memory as a TMX file (from any TMS — Trados, memoQ, Phrase, Crowdin), upload their glossary as a CSV, and configure their brand voice. From that point, every translation for that client applies those settings automatically.
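
In code terms, the one-time setup might look like the sketch below. The SDK calls are hypothetical; they stand in for whatever upload flow the workspace UI provides:

```typescript
// Hypothetical onboarding sequence. createClientProfile,
// importTranslationMemory, importGlossary, and setBrandVoice are
// illustrative names, not a documented Flixu SDK.
interface ClientProfile {
  importTranslationMemory(tmxPath: string): Promise<void>; // TMX export from any TMS
  importGlossary(csvPath: string): Promise<void>;          // term,translation pairs
  setBrandVoice(config: object): Promise<void>;
}

declare function createClientProfile(name: string): Promise<ClientProfile>;

async function onboardClient(): Promise<void> {
  const profile = await createClientProfile("client-a");
  await profile.importTranslationMemory("exports/client-a.tmx");
  await profile.importGlossary("exports/client-a-terms.csv");
  await profile.setBrandVoice({ register: "formal", tone: ["precise"] });
  // From here, every translation for client-a applies these settings.
}
```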

2. Run AI translation with client context loaded

When a project file is uploaded — .docx, XLIFF, .po, .yaml, .strings, Markdown — the translation runs with the client's Glossary, Brand Voice, and Translation Memory already active. The output your translators receive reflects the client's approved terminology and voice from the first sentence.

3. Review by exception

LQA scores every segment. Segments scoring above 90 are auto-approved; segments below that threshold land in the review queue with the specific failing dimension flagged. Translators spend their time on the work that genuinely needs human judgment, not on confirming that correctly translated segments are correct.
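
The routing rule is simple enough to state as code. A minimal sketch, assuming the threshold and record shape described above:

```typescript
// Review-by-exception routing: segments above 90 auto-approve, the rest
// queue with the failing dimensions flagged. Types are a minimal sketch.
type Dimension =
  | "grammar" | "accuracy" | "terminologyConsistency" | "formatting" | "fluency";

interface ScoredSegment {
  segmentId: string;
  scores: Record<Dimension, number>;
  overall: number;
}

const AUTO_APPROVE_THRESHOLD = 90;
const reviewQueue: { segmentId: string; failing: Dimension[] }[] = [];

function routeSegment(seg: ScoredSegment): "auto-approved" | "queued" {
  if (seg.overall > AUTO_APPROVE_THRESHOLD) return "auto-approved";
  // Surface the specific dimensions that dragged the score down,
  // so the reviewer sees why the segment needs attention.
  const failing = (Object.keys(seg.scores) as Dimension[])
    .filter((d) => seg.scores[d] <= AUTO_APPROVE_THRESHOLD);
  reviewQueue.push({ segmentId: seg.segmentId, failing });
  return "queued";
}
```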

4. Client corrections improve the system

When translators correct an output segment and approve the revision, the correction feeds back into the client's Translation Memory through the Post-Edit Learning Loop. Flixu gets better for that client with each project: first-draft quality improves, and post-editing time shrinks with it.
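
The mechanics of that loop reduce to one write-back step. A sketch, assuming the TM is an append-only store of approved pairs:

```typescript
// Sketch of the feedback step: an approved post-edit becomes a new TM
// entry in the client's workspace. Names are assumptions.
interface TmEntry {
  source: string;
  target: string;      // the human-approved phrasing
  approvedBy: string;
  approvedAt: string;  // ISO 8601
}

const clientTm: TmEntry[] = []; // stands in for the client's shared TM

function confirmPostEdit(source: string, editedTarget: string, reviewer: string): void {
  // The correction is written back to the client's TM, so the next
  // translation run retrieves the approved phrasing as a style reference.
  clientTm.push({
    source,
    target: editedTarget,
    approvedBy: reviewer,
    approvedAt: new Date().toISOString(),
  });
}
```

These entries are also what the shared TM serves to every other translator on that client, which is where the per-client improvement compounds.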

What's different about how Flixu handles agency workflows.

Most MT tools offer a glossary feature. The operational difference is when and how the glossary is applied.

Standard MT with a glossary: the model translates the string, then a find-and-replace script substitutes the approved term. Because the surrounding grammar was built around a different word, the substitution produces awkward constructions in inflected languages, where articles, case endings, and agreement all depend on the term they surround.

Flixu: the glossary term is loaded as a payload constraint before the model generates text. The model builds the surrounding grammar around the fixed term from the start. The output reads naturally because the constraint was present before generation, not applied after.

The same logic applies to Translation Memory. Standard TM retrieves character-similar strings and substitutes them. Flixu's Semantic Reranker identifies conceptually similar past translations and uses them as style references — the model generates output that reflects your approved style, rather than copying text that happened to look similar.
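
As a sketch of the difference, semantic retrieval ranks TM entries by embedding similarity instead of character overlap, then hands the closest matches to the model as style references. The embedding function here is assumed; any sentence-embedding model fills that role:

```typescript
// Minimal sketch of semantic reranking over a client TM. embed() is an
// assumed sentence-embedding function, not a Flixu API.
declare function embed(text: string): number[];

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function styleReferences(
  sourceSegment: string,
  tm: { source: string; target: string }[],
  k = 3,
): { source: string; target: string }[] {
  const query = embed(sourceSegment);
  return tm
    .map((entry) => ({ entry, score: cosine(query, embed(entry.source)) }))
    .sort((a, b) => b.score - a.score) // conceptually closest first
    .slice(0, k)
    .map(({ entry }) => entry);
}
```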

The combination is what makes post-editing time fall rather than plateau: the output is stylistically correct before the translator sees it, not just technically correct.

Frequently Asked Questions

We integrated DeepL two years ago and our post-editing time hasn't decreased. Why would Flixu be different?

The most common reason post-editing stays high with generic MT is that the output is stylistically wrong for the specific client — technically accurate, wrong tone, wrong register, wrong terminology. Flixu loads client-specific Brand Voice, Glossary, and Translation Memory before the model generates anything. The first draft reflects the client's approved style, not the statistical average of the language. That's what reduces the post-editing load — not a better model, but the right context loaded before generation.

How does Flixu keep client Translation Memories and glossaries isolated from each other?

Each client operates in an isolated workspace profile with their own Translation Memory, Glossary, and Brand Voice configuration. The Semantic Reranker only searches within the assigned client's TM — a term approved for Client A is not retrievable in Client B's translation context. There's no cross-contamination by design.

Can multiple freelancers share a Translation Memory in real time?

Yes. When any team member confirms a translation in a client workspace, the approved phrasing is immediately added to the shared Translation Memory. The next translator working on the same client's content benefits from that approval — whether it was confirmed five minutes ago or five months ago.

Can we import our existing Translation Memory from Trados, memoQ, or Phrase?

Yes. Export your TM as a .tmx file from any TMX-compatible platform and import it directly into the client profile in Flixu. Your historical approved translations are available as semantic style references from the first translation run. Glossaries import via CSV.
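
For reference, TMX is a simple XML format: each <tu> element is one translation unit, and each <tuv xml:lang="..."> inside it is one language variant. A minimal reading of the file, using the browser's DOMParser as an illustration rather than Flixu's actual importer:

```typescript
// Sketch of what a TMX import reads. Uses the browser's built-in
// DOMParser; a Node pipeline would use an XML library instead.
interface TmPair { source: string; target: string }

function parseTmx(tmxXml: string, srcLang: string, tgtLang: string): TmPair[] {
  const doc = new DOMParser().parseFromString(tmxXml, "application/xml");
  const pairs: TmPair[] = [];
  doc.querySelectorAll("tu").forEach((tu) => {
    let source = "", target = "";
    tu.querySelectorAll("tuv").forEach((tuv) => {
      const lang = tuv.getAttribute("xml:lang") ?? "";
      const seg = tuv.querySelector("seg")?.textContent ?? "";
      if (lang.startsWith(srcLang)) source = seg;
      if (lang.startsWith(tgtLang)) target = seg;
    });
    if (source && target) pairs.push({ source, target });
  });
  return pairs;
}
```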

How does the LQA reporting work for client deliverables?

Every segment receives an LQA score across five dimensions automatically. The scores are logged with the segment, the decision trail (auto-approved via threshold or reviewed by a human), and the timestamp. For clients who require quality documentation — compliance contexts, regulated industries — that data is accessible from the workspace without a separate reporting step.

What file formats does Flixu support for agency workflows?

.docx, XLIFF, .po, .yaml, iOS .strings, JSON, Markdown, and subtitle files. All structural elements (keys, variables, tags, formatting) are preserved exactly in the output file. Adobe InDesign IDML is not currently on the supported format list; check the documentation for current format support before onboarding clients whose primary deliverable is IDML.

How does pricing work for agencies managing multiple clients?

Flixu prices on translation volume — words translated. There's no per-seat licensing that increases as you add translators or PMs to the workspace. Adding a reviewer, a project manager, or a regional marketer doesn't change the invoice. The bill reflects how much you translated, not how many people touched the project.

Set up your first client profile and run a test project.

Upload your TM, configure the glossary, and translate a sample document. Compare the output with what your current MT pipeline produces on the same content.

Related Features