The results are clear for Zendesk. Drain mentions a case in which a localization department approached her for help with improving their translation quality. They, too, used an MT engine, but their results were far from satisfactory. Setting aside other factors, such as post-editing and customer feedback, what became clear was that the MT engine used by Drain and her team was simply better trained.
But these aspects shouldn’t be taken in isolation from each other. Training an MT engine is a continuous process, and the more high-quality input is provided, the better the results are. Having a good customer feedback loop means that Zendesk is able to target pages that require human intervention. Post-editors contribute to the training process by correcting errors, and the results are fed back into the system. This in turn helps make the content of machine-translated pages better.
It’s a continuous cycle of improvement on all fronts, one which Zendesk seems to have mastered.
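The feedback loop described above can be sketched in code. This is a hypothetical illustration, not Zendesk's actual system: the class names and methods are invented to show the flow from customer feedback, to post-editing, to new training data.

```python
# Hypothetical sketch of the feedback loop described above: customer
# feedback flags weak pages, post-editors correct them, and each
# correction becomes a new high-quality training pair for the MT engine.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class TranslationRecord:
    source: str
    machine_output: str
    post_edited: Optional[str] = None
    flagged: bool = False

@dataclass
class FeedbackLoop:
    training_corpus: list = field(default_factory=list)

    def flag_from_feedback(self, record: TranslationRecord) -> None:
        # Customer feedback targets pages that need human intervention.
        record.flagged = True

    def apply_post_edit(self, record: TranslationRecord, corrected: str) -> None:
        # The post-editor's correction is fed back into the training data.
        record.post_edited = corrected
        self.training_corpus.append((record.source, corrected))

loop = FeedbackLoop()
rec = TranslationRecord(source="Hello", machine_output="Bonjour le")  # imperfect MT output
loop.flag_from_feedback(rec)           # a customer reports a problem
loop.apply_post_edit(rec, "Bonjour")   # a post-editor fixes it
print(len(loop.training_corpus))       # prints 1: one corrected pair ready for retraining
```

Each pass through the loop grows the corpus of human-verified translations, which is exactly the "more high-quality input" that continuous MT training depends on.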
The last point is simple, although quite easy to overlook, so it bears repeating: today, machine translation is capable of generating usable content.
It’s a common refrain, that’s true, but machine translation engines really have come a long way. The days when MT was a novelty that produced amusing errors are long gone; translation quality has risen enough to be genuinely useful in industrial and commercial contexts.
The key difference between then and now boils down to one thing: neural machine translation (NMT). Neural machine translation is a form of MT that uses neural networks, which can process massive amounts of translation data with relative efficiency, dramatically increasing the quality of translations. Today, virtually every major MT engine is neural.
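To make the idea concrete, here is a toy sketch of the two halves of an NMT system: an encoder that turns source tokens into a vector summary, and a decoder step that scores target-vocabulary words from that summary. This is a bare-bones illustration with random weights, not a real translation engine; production systems learn these weights from millions of sentence pairs.

```python
# Toy illustration of the encoder/decoder idea behind NMT.
# All weights are random; a real engine would train them on parallel text.
import numpy as np

rng = np.random.default_rng(0)

src_vocab = {"the": 0, "results": 1, "are": 2, "clear": 3}
tgt_vocab = ["die", "ergebnisse", "sind", "klar"]

d = 8  # embedding / hidden size
src_embed = rng.normal(size=(len(src_vocab), d))   # one vector per source word
out_proj = rng.normal(size=(d, len(tgt_vocab)))    # maps summary to target scores

def encode(tokens):
    """Mean-pool token embeddings into a single summary vector."""
    ids = [src_vocab[t] for t in tokens]
    return src_embed[ids].mean(axis=0)

def decode_step(summary):
    """Score each target word; softmax turns scores into probabilities."""
    logits = summary @ out_proj
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

probs = decode_step(encode(["the", "results", "are", "clear"]))
print(probs.shape)  # one probability per target-vocabulary word
```

Real NMT models replace the mean-pooling encoder and single decoding step with deep recurrent or transformer networks, but the pipeline of embed, encode, and predict-the-next-target-word is the same.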