Author: Luis Damian Moreno Garcia
Nowadays, AI translation is giving rise to more and more cases of “vibe translating”. While LLMs are incredibly powerful, relying on them too heavily seems to be creating a paradox that many of us are currently overlooking:
The more we depend on AI for translation, the less equipped we may become to properly judge whether any given translation is accurate.
For example, you may now use an LLM to translate a legal document (we may call this Vibe Legal Translation). The output looks polished and professional, but what is the only way to definitively ascertain its accuracy? For the time being, humans with very specialised knowledge, which requires a lengthy “training” process on their part.
This increasing over-reliance on AI for translation purposes creates a dangerous feedback loop. The less we engage with the process of translation ourselves, the more we lose the ability to critically evaluate the output.
This is a very relevant consideration for professionals, students, companies and governments alike. Translation isn’t just about swapping words between languages according to probabilities; it’s about understanding context, culture, and intent, and sometimes it is about localising, adapting or rewriting content altogether. If we stop developing these skills, we’re left blindly trusting machines that, while impressive, are currently far from perfect.
I see this as akin to never removing the training wheels from your bike. Later on, you may even believe that you know how to ride a bike. This is but an illusion. You do not actually know how to ride a bike. As soon as the training wheels are gone, you are bound to fall.
The solution here seems to be:
- Attain knowledge and skills first: learn how to do a given task by yourself.
- Then, use AI as a tool, not a crutch.
This, to me, seems to be the only way to properly leverage AI for any task, including translation. Otherwise, you are essentially trusting a black box with something as important as communication.