Microsoft today announced that its Microsoft Translator service will use Z-code Mixture of Experts (MoE) models to significantly improve translation quality. In the MoE approach, a single model learns to translate between multiple languages at the same time, routing each input to a subset of specialized "expert" sub-networks.
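Microsoft has not published the Z-code implementation details in this announcement, but the general sparse MoE idea can be sketched briefly: a small gating network scores a set of expert feed-forward blocks and sends each token only to its top-scoring experts, so model capacity grows with the number of experts while per-token compute stays roughly flat. The following is a minimal, illustrative PyTorch sketch; all layer sizes, names, and the top-2 routing choice are assumptions for illustration, not Microsoft's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Illustrative sparse Mixture-of-Experts feed-forward layer (not Z-code itself)."""

    def __init__(self, d_model=512, d_ff=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, num_experts)  # routing network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        scores = F.softmax(self.gate(x), dim=-1)            # (num_tokens, num_experts)
        weights, indices = scores.topk(self.top_k, dim=-1)  # keep only top-k experts per token
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[:, k] == e                   # tokens routed to expert e at rank k
                if mask.any():
                    out[mask] += weights[mask, k:k + 1] * expert(x[mask])
        return out

# Usage: route a batch of 10 token vectors through the layer.
layer = MoELayer()
y = layer(torch.randn(10, 512))  # -> shape (10, 512)
```

Because each token activates only two of the eight experts here, most expert parameters sit idle on any given token, which is what makes such models cheaper to run than their total parameter count suggests.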
Microsoft Translator used NVIDIA GPUs and Triton Inference Server to deploy and scale these models efficiently for high-performance inference. While other companies have explored Mixture of Experts models in the past, Microsoft says it is the first to put the technology into production for customers.
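Triton exposes deployed models over HTTP and gRPC endpoints. As a rough illustration of what querying such a deployment could look like, here is a minimal client sketch using NVIDIA's tritonclient Python package; the model name, tensor names, and shapes are hypothetical placeholders, not Microsoft's actual translation endpoints.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton server running locally (hypothetical setup).
client = httpclient.InferenceServerClient(url="localhost:8000")

# "INPUT_IDS" / "OUTPUT_IDS" and the model name are illustrative assumptions.
inp = httpclient.InferInput("INPUT_IDS", [1, 16], "INT64")
inp.set_data_from_numpy(np.zeros((1, 16), dtype=np.int64))  # dummy token IDs
out = httpclient.InferRequestedOutput("OUTPUT_IDS")

result = client.infer(model_name="zcode_moe_translator", inputs=[inp], outputs=[out])
print(result.as_numpy("OUTPUT_IDS"))  # decoded target-language token IDs
```

Serving through a dedicated inference server like this lets the same GPU pool batch and schedule requests across many language pairs handled by one multilingual model.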
According to Microsoft, the new Z-code MoE systems outperformed individual bilingual systems by 4 percent on average. The new models improved Japanese to English by 7.6 percent, English to Arabic by 9.3 percent, and English to Slovenian by 15 percent.
“This opens the way to high quality machine translation beyond the high-resource languages and improves the quality of low-resource languages that lack significant training data. This approach can provide a positive impact on AI fairness, since both high-resource and low-resource languages see improvements,” wrote the Microsoft Research team.
Availability details:
Z-code models are available now, by invitation, to Document Translation customers. They will be rolled out to all customers and to other Translator products in phases.