Google Translate has been a popular online translation tool since its launch in 2006. A native iOS app, launched in 2011, has expanded its use cases, providing immediate translation services for the mobile user.
The conceptual model of the service is the human translator, especially given the convenience of the app — access is quick and the results arrive in real time. The features of the app are what you would expect from a professional translator: on-the-spot interpretation of any form of communication.
Once the app is opened, the affordances are easily discoverable. The center empty space has an explicit call to action, “Enter text,” signifying where the user can enter the words they need translated. The two languages listed on top are prominent in size and color (blue), with arrows signifying the act of translation. The inclusion of two arrows might cause some confusion, however. The app relies on a cultural constraint — the convention of reading from left to right — but the arrows may suggest a different direction to users from cultures that read otherwise. It might also not be immediately clear that the languages can be changed by tapping on them; this perceived affordance gets lost in the design and could benefit from more signifiers.
Once the text is entered, the app translates it in real time, providing simultaneous results (feedback), just as a human translator would. The results are so quick, and the action so intuitive, that it feels like a harmonious convergence of the user’s conceptual model of translation services and the app’s design model, aided by the system image (the interface design).
Once the text is translated, an arrow icon pops up. The perceived affordance that it’s tappable is clear, with the expectation of learning more about that text, and the subsequent display of further information feels like a logical mapping. Alternative translations are provided, along with different definitions of the word. This is when the user consciously reflects on their goal: is the translated text the correct definition? Are other translations better suited? Since this reflective level of processing is where a user decides whether or not they would recommend a product, the app aids them in their evaluation.
The app also accounts for small human errors, such as typos — iOS itself will autocorrect (one form of constraint), but even with a misplaced letter, the app still recognizes the likely text input. This is especially important for foreign languages; a user attempting to translate an unfamiliar language may not get the spelling right. The app politely responds with “Did you mean” suggestions. Since humans make errors all the time, it’s reassuring to see that the app design takes this into account.
History & Saved
Another great functionality of the app is the tracking of your text inputs — this history pops up at the bottom whenever you return to the homepage, and it shows up as suggestions when you enter a new input that might match a previous one. Users are also able to manually save their translated text inputs and retrieve them later. Since humans have limited short-term memory, having this history (i.e. memory) tracked for you is a great convenience, and all the better when it’s seamlessly integrated into the design.
Conversation & Transcription
Google Translate also provides real-time translation and transcription of voice conversations. This feature, again, follows the user’s conceptual model of translation services. The action is seamless and instantaneous: the app automatically transcribes the speech and translates it into the target language. There’s no way to save the transcription, however, which demands more cognitive effort from the user to remember what’s being said as they read the text. One could take a screenshot, but that would be a workaround.
Lastly, the camera translation feature helps optimize the planning and execution steps of entering text. When a user encounters written text (in a manual, on a label, on a street sign, etc.), they can just point the camera at it and the app translates it on the spot, superimposing the translated text directly in the camera view. This feature satisfies on both the visceral and behavioral levels of processing, providing instant reward/feedback, along with a sense of astonishment at seeing the image transform in real time with the translated text.