THINKING IN ENGLISH, SPEAKING LOCALIZED
GENERIC PROBLEMS IN TURKISH LOCALIZATION
RELATED TO THE SOURCE TEXT STRUCTURE
by Koral Özgül
Also published on ProZ.com.
Although software and services often target a multilingual customer base in today's global market, the approach to the "product" and its documentation often remains monolingual by design. It is doubtful, however, whether a final adaptation pass that merely localizes the textual material can make the product usable (in the ergonomic sense) for users who do not speak English. The comprehensibility of a structure that involves language arguably begins with its very architecture.
This view may seem a bit extreme, but it is at least certain that a degree of versatility in the source design would make it possible to deliver a product that is (almost) as usable as the "original".
If language is treated as something beyond culture and mentality, like mathematics or pure technique, the resulting product will most probably remain "foreign" to the targeted end users, if it does not suffer more serious defects that cripple its functionality (which is not rarely the case either).
The major European languages are word-based. That is, sentences are built accumulatively, adding words (including prepositions) one by one, each of which carries an independent meaning in its own right. The words undergo no significant changes or inflections when they take part in phrases or sentences. Words enjoy considerable autonomy in the Indo-European language sphere.
I would compare this to the beads on an abacus.
Turkish, however, is a so-called "agglutinative language". That is, every word is subject to complex changes when it participates in a phrase or sentence. Inflectional endings are appended to the base words according to the words they follow and precede, according to what they address, and according to their function in the sentence. Further endings are appended to those endings, and so on. The whole phrase or sentence is thus an organic entity; hardly a single word can remain untouched and unmodified within the whole.
Words and other elements of the language are treated like pinches of clay added to a bigger lump to form a statuette: the sentence, a supple but inseparable whole.
The inflectional endings also obey a series of vowel harmony rules. Thus the very same meaning must be expressed with different phonemes, depending on the word the ending is appended to.
Definition from Britannica.com
agglutination: a grammatical process in which words are composed of a sequence of morphemes (word elements), each of which represents not more than a single grammatical category. This term is traditionally employed in the typological classification of languages. Turkish, Finnish, and Japanese are among the languages that form words by agglutination.
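The mechanics described above can be sketched in code. The following Python snippet is an illustrative toy, not a morphological engine; it covers only a single suffix, the Turkish locative -DA, and picks the correct surface variant according to vowel harmony and final-consonant voicing:

```python
# Toy illustration of Turkish vowel harmony for the locative suffix "-DA".
# One grammatical meaning ("in/at/on X") surfaces as -da/-de/-ta/-te,
# depending on the word's last vowel (front/back) and final consonant
# (voiced/voiceless).

FRONT_VOWELS = set("eiöü")
BACK_VOWELS = set("aıou")
VOICELESS_FINALS = set("pçtkfhsş")  # Turkish voiceless consonants

def locative(word: str) -> str:
    """Append the locative suffix with vowel harmony and consonant voicing."""
    last_vowel = next(c for c in reversed(word) if c in FRONT_VOWELS | BACK_VOWELS)
    vowel = "e" if last_vowel in FRONT_VOWELS else "a"
    consonant = "t" if word[-1] in VOICELESS_FINALS else "d"
    return word + consonant + vowel

# One meaning, four different surface forms:
print(locative("ev"))      # evde    ("in the house")
print(locative("okul"))    # okulda  ("at school")
print(locative("sepet"))   # sepette ("in the basket")
print(locative("kitap"))   # kitapta ("in the book")
```

Real Turkish morphology goes much further: endings stack on endings, as in ev-ler-imiz-de ("in our houses"), and each ending harmonizes in turn with what precedes it.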
OR WAY OF THINKING AND CONCEIVING
These two totally different approaches in these cultural and lingual spheres each have their pros and cons, and probably also consequences for the socio-cultural behavior of their speakers. What matters for people dealing with multilingual products is to be aware of these differences and keep them in mind when designing and producing. Otherwise the product may only look multilingual at first sight... until you try to read and follow it.
Would you buy and use such a product? Would you maintain your productivity if you did?
I have heard many translators state that context is everything in translation. That may sound exaggerated, but context is at least more than an auxiliary reference: it virtually determines the resulting translation. A glossary of terms alone is therefore often of little use; in many cases it becomes a hindrance rather than an aid. (Otherwise machine translation would have been perfected long ago, and translators would be superfluous.) But this also means that a translator depends desperately on context, or he or she will simply make many, and serious, mistakes.
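A small invented illustration of why a glossary lookup alone cannot decide a translation: a single English term such as "free" maps to entirely different Turkish words depending on what it means in context (the example phrases below are my own, not from any product):

```python
# Invented illustration: one English term, several correct Turkish renderings.
# Which rendering is right is decided by context, not by a term list.

translations_of_free = {
    "free download":   "ücretsiz indirme",      # free = costing nothing
    "free space":      "boş alan",              # free = unoccupied
    "free the memory": "belleği serbest bırak", # free = release
}

for english, turkish in translations_of_free.items():
    print(f"{english!r} -> {turkish!r}")
```

A glossary entry "free = ücretsiz" would produce nonsense in two of the three phrases; only the surrounding sentence tells the translator which sense applies.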
Due to the radically different syntax rules of Turkish, split segments present serious problems in Turkish localization.
The text in the source material is often structured with a heavily "English-oriented" attitude, so to speak. This is understandable, yet it creates serious localization problems when the source files are not prepared and structured with the fact in mind that they ARE indeed supposed to be localized into OTHER languages.
Moreover, this issue renders Trados/SDLX memory units virtually useless for further matches, even in the case of seemingly 100% matches, because of the radical syntax differences between the source and target languages.
Let me illustrate the problem with an imaginary example, to make it more conceivable for readers in the Indo-European lingual sphere:
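The segment texts below are invented for illustration. Suppose one English sentence from a manual has been split into three segments, for instance by hard line breaks or formatting tags. Natural Turkish syntax (verb-final, with subordinate clauses coming first) forces the same content into roughly the reverse order, so segment-by-segment alignment pairs pieces that do not correspond:

```python
# Invented example: one English sentence split into three CAT-tool
# segments, next to a natural Turkish translation of the whole sentence.

english_segments = [
    "Click the Save button",    # segment 1: main clause
    "to store your changes",    # segment 2: purpose clause
    "before closing the file.", # segment 3: temporal clause
]

# Natural Turkish order: temporal clause, purpose clause, main clause.
turkish_parts = [
    "Dosyayı kapatmadan önce",          # = English segment 3
    "değişikliklerinizi saklamak için", # = English segment 2
    "Kaydet düğmesine tıklayın.",       # = English segment 1
]

# Pairing the segments positionally, as a translation memory would,
# stores source/target pairs whose contents do not match:
for en, tr in zip(english_segments, turkish_parts):
    print(f"TM pair: {en!r} <-> {tr!r}")
```

Each stored pair couples an English clause with a Turkish clause that translates a different part of the sentence, so none of the pairs is reusable.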
As you can see, the subparts of the sentence in the individual segments do not form a correct match in any of the segments. Consequently, these three segments are practically useless for further memory matches.
Koral Özgül, Istanbul, June 2007