Impact of Machine Translation on Korean Translation Efficiency and Quality

The quality of machine translation has been improving steadily for years, and many in the translation industry are concerned about the impact on the livelihoods of professional translators. Cheap post-edit workflows built on machine translation are becoming more and more prevalent. For several years now, I’ve been trying to get ahead of the curve by integrating machine translation into my own work, with the goal of improving both my efficiency and my quality in the process.


Google Translate

My first recent attempt was to integrate Google Translate into my workflow on a large project. I alternated between using and not using the Google Translate suggestions. I would translate without machine translation for an hour or so. Then I would spend a while letting Google Translate provide a first draft of each segment, which I would then edit up to a higher level. Since this was a premium Korean-to-English workflow, I also went back over each segment again to polish the translation. I timed myself to see how much faster I was translating with machine translation than without it. Surprisingly, I found almost no difference in my overall speed, and I felt less confident of my final translation quality.

Slate Desktop

I later tried leveraging some old TMs (“translation memories”; click here for a glossary of translation terminology) to create custom MT engines with Slate Desktop, machine translation software that runs locally on my computer. After a little effort, though, the program’s creator advised me that to get useful support from the software for my language pair, I would need more content, and that the content would need to be more specialized and cleaned up. I have tried in the past to categorize my TM content in ways that might be conducive to such an effort.

However, I work on a wide range of subject matters. This means I only end up with small volumes of content in granular groupings. I don’t know how I would generate enough specialized content to train an MT engine that would be useful on an actual project. Even if I could organize my content adequately for quality, what if I don’t get enough (or any) projects that match all that effort? I understand that the MT engine training process is highly topic specific. I gave up when I concluded that the effort would be unlikely to pay off for me.

Korean machine translation services

I’ve kept my eye on the technology, though. When a couple of Korean portals recently came out with their own machine translation engines for Korean and English, I ran some comparisons. I was really impressed with both Naver Papago and Kakao MT. Comparing their output, I found that both are better than Google Translate at translating Korean to English. This inspired me. Using the Inten.to plugin with memoQ (my translation-environment tool), I connected Kakao MT to my translation tool and did some testing. (Naver Papago is not as easily connected through its API, so I didn’t test it in this experiment. However, I had already determined through web-interface testing that Naver Papago and Kakao MT are pretty close in quality.)
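For readers curious what “connecting an MT engine to a translation tool” involves under the hood, plugins like Inten.to essentially send each segment to the engine as a small REST request and drop the response into the editor as a draft. Here is a minimal sketch in Python; the endpoint URL, parameter names, and response field are placeholders for illustration only, not the actual Kakao MT or Naver Papago API.

```python
import json
import urllib.request

# Placeholder values -- a real engine would supply its own endpoint and key.
MT_ENDPOINT = "https://example.com/v1/translate"
API_KEY = "your-api-key-here"


def build_request(segment: str, source: str = "ko", target: str = "en") -> urllib.request.Request:
    """Package one source segment as a JSON POST request to the MT engine."""
    payload = json.dumps({"q": segment, "source": source, "target": target}).encode("utf-8")
    return urllib.request.Request(
        MT_ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )


def translate(segment: str) -> str:
    """Send one segment to the engine and return its draft translation.

    The 'translatedText' field is a hypothetical response key.
    """
    with urllib.request.urlopen(build_request(segment)) as resp:
        return json.load(resp)["translatedText"]
```

A CAT-tool plugin would call something like `translate()` once per segment as the translator moves through the file, which is why per-request latency and quality both matter so much in practice.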

Unfortunately, even though the new engines are better than Google Translate, none of them are good enough. They don’t actually improve my work efficiency unless I compromise the final quality. This is because coming up with a first draft isn’t hard anyway. I can do that on my own without the MT. I can, and do, do this pretty quickly on every project. The hard part is getting the translation to final polished form. In the process of doing the first draft myself, I become familiar with the text. This way, I can focus straight away on the final editing during my second pass.

But I find that the machine-translated draft just gets in the way of understanding the text on my first pass, which means I’m not as ready for the final polishing on the second pass. The fact that the MT output doesn’t match my writing style may help me come up with a wider variety of expressions in my work, but it also creates a kind of thought dissonance: I end up spending an inordinate amount of time rearranging words when I’m trying to produce the final draft of my translation. As best I can tell, MT really is only suited for raw output or post-edit work. “Post-edit” refers to a cut-rate approach to translation in which a human edits machine translation output but doesn’t aim for a polished final version; instead, the focus is on being fast, cheap, and “good enough.”

My Korean hybrid (human + machine) translation service

I have even developed an MT-based service around my so-called hybrid translation approach, a step up from a standard “post-edit” workflow. So far, though, I’m not getting much traction from it. For the reasons described above, the translations I commit to delivering with the hybrid service are cheaper, but they are not as good as the work I aim for under my premium translation service. Perhaps I’ll eventually figure out how to make hybrid translation a viable service, but then I’ll be competing with all the other translators doing so-called post-edit work, and I’m not sure I have a competitive advantage in this area.

Understanding the limitations of machine translation

Getting from 70% to 100%

I’m reaching the conclusion that machine translation (and, by extension, new translators entering the translation business) isn’t necessarily a big threat (at least for the time being) on difficult Korean-to-English content, which is the kind of work I specialize in. Getting a difficult translation from 0% (nothing) to 70% (mediocre) is easy; any beginner translator or MT program can do that. The hard part is getting it from 70% (mediocre) to 100% (perfect — well, almost perfect). An MT program in the hands of a mediocre translator still produces mediocre output. I also don’t see MT output getting to 100% anytime soon; improvements have probably already plateaued under the current technology.

Value, as always, is at the top of the market

The subject of machine translation has been on my mind for a long time. In fact, I’ve written about a variety of machine translation workflows in Korean. But I can’t seem to find a way to include machine translation in my workflow as more than just a quick vocabulary check (which I do with the clever little app called GT4T).

In conclusion, machine translation may improve from its current level of, say, 70%, up to 80%. The market could flood with new translators because of a bad economy or for other reasons. But if none of these resources does a great job on mission-critical translation work in specialized fields, the top of the market remains unaffected. Industry veterans with long experience, know-how, and subject-matter expertise may not need to be greatly concerned. I’ve been working hard to improve and market myself in my Korean translation specializations (finance, business, etc.) for several years now, and I’m finding that I’m being entrusted with harder work than ever. With specialization comes proficiency, and my speed hasn’t dropped, especially because I am aggressively testing and using the available technology that really does improve my efficiency. Unfortunately, machine translation is still not a big part of my toolbox.

Steven Bammel

Steven S. Bammel is president and chief translator/consultant at Korean Consulting & Translation Service, Inc. A graduate of the University of Texas at Arlington (B.B.A. Economics) and Hanyang University (M.S. Management Strategy), Steven has worked for over twenty years in Korean business and translation.
