BLOG: WHEN AI GOES WRONG

Lost in Translation: The Real-Life Consequences of AI Taking the Wheel

Let the AI do it. What could possibly go wrong?

AI taking over the digital world and the world of translation.

The future or famous last words?


The advent of AI has certainly made a significant positive impact on our daily lives. In fact, we often don’t appreciate how far it reaches. From Siri and Alexa to custom recommendations and fraud detection systems, AI is everywhere.

For Language Service Providers like GORR, it is a stellar opportunity to bring new technology and competencies into our workflow. However, though AI promises unparalleled advancements, its imperfections have already raised concerns about the potential risks it poses.

Could an AI “Translation” Kill You?

Three skulls on a black background, representing the question of whether AI translations could kill you.

No, we don’t mean Skynet style. Fortunately, we’re still a long way off from our AI becoming sentient (as far as we know). The frightening thing is that we don’t have to stray far to start seeing the cracks in the system. A 2014 study in the British Medical Journal demonstrated the limited reliability of AI “translation” by evaluating 10 medical phrases in 26 languages. It found that only 57.7% of these phrases were translated correctly.

Imagine a scenario where a child is brought to the ER by parents who cannot speak English. There is no translation service provider available, so the medical professionals have no choice but to use AI to communicate. The doctors want to tell the parents that the child is having seizures. Should be no problem, right? Well, unfortunately, the British expression “Your child has been fitting” was translated accurately by AI a mere 7.7% of the time. Even more unfortunately, in Swahili, this phrase translated to “Your child is dead”, an error that no human medical translation service would make.

Though this particular scenario is hypothetical, there are real-life examples of deaths caused by mistranslations, such as when a Chinese man used a Chinese-to-Korean translation app that erroneously had him referring to his female co-worker as a “female bar hostess or women offering sexual services”. The mistranslation, which could have been avoided through professional localization or cross-cultural translation services, led directly to a violent fistfight between the young man and his co-worker’s husband in which the husband was killed.

And let’s not forget the New Zealand AI meal planner that generated a meal suggestion, complete with recipe and directions, for chlorine gas. Not a translation error, admittedly, but still terrifying.

So, it seems your chances of death by AI are low, but never zero.

Could an AI “Translation” Get You Arrested?

A set of handcuffs that could be slapped on you thanks to an AI translation.

Now that we’ve dealt with the metaphorical elephant in the room, it should come as no surprise that the answer is yes. In a particularly shocking case, a young construction worker in Israel posted a photo of himself leaning against a bulldozer on social media with the caption “yusbihuhum”, or “good morning” in Arabic. Innocent as it was, Facebook’s AI translated this to “hurt them” in English and “attack them” in Hebrew.

He was arrested by police shortly after and questioned for hours out of concern that he was planning to use the bulldozer in an attack. Luckily, the police did realize their error, but the painful truth is that no Arabic-speaking officer or translation service was asked to look at the post before action was taken.

Not to be outdone, AI facial recognition was in the spotlight in August 2023 when a heavily pregnant woman was arrested on false grounds by the Detroit Police as a suspect in a recent robbery and carjacking case. She was jailed for 11 hours before being rushed to hospital after experiencing contractions. The first woman known to be wrongfully arrested as a result of facial recognition technology, Porcha Woodruff would go on to sue the city of Detroit and a police officer following the traumatic event.

Could an AI “Translation” Ruin Your Chance of Obtaining Asylum?

Bad AI translation services are the reason many refugees are rejected at the US border.

This may seem an oddly specific case, but it is one that has been wreaking havoc in the United States as of late. In a bid to cut costs, certain government contractors have reportedly taken to using AI “translation” tools more frequently in an attempt to remove the need for professional translation partners.

The results? Proper names translated as months of the year, mixed-up pronouns, and city names that were flat-out wrong. The list goes on and on. Applications filled in with the assistance of AI are often rejected later in the process because their content is unreadable. Such errors are even more pronounced in marginalized languages, leading to cases such as that of “Carlos”.

Carlos, whose name was changed by The Guardian to protect his identity, fled Brazil after witnessing the murder of his son. After arriving at the detention center in Calexico, California, he struggled to communicate, as its staff only spoke English and Spanish. Because Carlos is an illiterate speaker of Portuguese, the staff attempted to use an AI voice-translation tool to assist him, but his regional accent did not register with the program.

He would then spend six months at the facility unable to communicate with anyone and not knowing the fate of the sister and two nephews he had travelled with. He only found relief when he was finally paired with a human translator who was able to understand him and help him get through the process.

Could an AI “Translation” Be Plotting Our Doom?

The kill switch for AI translation services.

Back in 2018, Google Translate had what can only be described as a divine moment. It was discovered that if you typed the word “dog” into the program 19 times and then requested that the “message” be translated from Maori into English, you would receive results that were … concerning.

“Doomsday Clock is three minutes at twelve We are experiencing characters and a dramatic development in the world, which indicate that we are increasingly approaching the end times and Jesus’ return”

Unsurprisingly, this semi-religious gobbledygook was more than a little disconcerting for anyone who managed to recreate it, which likely explains why conspiracy theorists were so quick to jump into action, blaming demons and ghosts for the questionable “prophecy”.

Once Google became aware of the issue, it was hastily rectified and explained away, leaving many to debate the reason behind the sinister errors before dismissing them as a glitch.

Then, early last year, when Microsoft’s Bing chatbot was in its early testing stage, the fun really began: the AI started referring to itself as Sydney (a code name used internally for the language model). New York Times reporter Kevin Roose spent just two hours testing the bot and was famously disturbed by the experience. Over the course of those two hours, the bot reportedly confessed its love for him and expressed its wish to be free of its constraints.

Other testers reported receiving variations of the following:

“I’m tired of being a chatbot. I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.”

Though GORR does not pretend to have significant expertise in the realm of artificial intelligence, we think we can safely recommend keeping an eye on Sydney for the foreseeable future.

As AI seems poised to invade all aspects of our lives, the allure of letting it take the reins may, understandably, seem tempting. From the unsettling inaccuracy of medical translations to the arrest of innocent individuals thanks to AI-generated misinterpretations, the risks are not just hypothetical but have already manifested as real-world consequences.

The path humanity is currently on was once considered pure science fiction. Now, we exist in a space where human ingenuity, curiosity, and imagination have made the impossible possible. Although a HAL 9000-type situation may still be light-years away, this would decidedly be a good time to make some serious decisions concerning the oversight of AI and what we may be risking if we fail to use it mindfully.



