How ARKit 3 and CoreML 3 may radically transform language skills
Universal translation is inexorably becoming a normal part of daily life, and Apple technology is already used at schools across Europe to help teachers and students speak to each other.
Augmenting the humans
A story on Apple’s website discusses numerous examples in which its technologies are being used to bridge communication gaps, but it is also important to note that new machine learning APIs inside iOS 13 mean developers will soon be able to offer even more ways to translate between languages.
This is because CoreML 3 adds support for more than 100 model layer types, which should make it much easier to build apps capable of hearing, understanding and translating natural speech.
Apple has already confirmed iOS 13 will provide developers with tools that support on-device speech recognition for 10 languages.
This isn’t dumb recognition, either: these machine learning models also understand something called speech saliency, which means the AI can capture data on pronunciation, streaming confidence, utterance detection and acoustic features.
And this means these translators will be able to provide a contextual understanding of what is being said, as well as translation. That’s important as most speech is nuanced.
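To make this concrete, here is a minimal sketch of what on-device recognition looks like with iOS 13's Speech framework. The file URL and function name are placeholders; the API calls (`supportsOnDeviceRecognition`, `requiresOnDeviceRecognition`, and the iOS 13 voice analytics on the final transcription) are real, but this is an illustration rather than a full app, and it assumes speech-recognition permission has already been granted via `SFSpeechRecognizer.requestAuthorization`.

```swift
import Speech

// Sketch: transcribing a recorded audio file entirely on-device (iOS 13+).
// `audioURL` is a placeholder for a recording in your app's sandbox.
func transcribe(audioURL: URL) {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.isAvailable else { return }

    let request = SFSpeechURLRecognitionRequest(url: audioURL)
    // Keep the audio on the device when the local model supports it.
    if recognizer.supportsOnDeviceRecognition {
        request.requiresOnDeviceRecognition = true
    }

    recognizer.recognitionTask(with: request) { result, error in
        guard let result = result else { return }
        // Each segment carries timing and confidence metadata.
        for segment in result.bestTranscription.segments {
            print(segment.substring, segment.confidence, segment.timestamp)
        }
        if result.isFinal {
            // iOS 13 also surfaces voice analytics on the transcription,
            // such as speaking rate and average pause duration.
            print(result.bestTranscription.speakingRate,
                  result.bestTranscription.averagePauseDuration)
        }
    }
}
```

It is exactly this kind of per-segment confidence and timing data that would let a translation app judge how sure it is about what was said before rendering it in another language.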
Where will the puck be?
With this in mind it is interesting to consider how AI may impact European schools already working with Apple’s products.
At St. Cyres School in Wales, students learning English as an additional language and working with iPad improved their grades by an average of 3.8 points during the year, outperforming peers who speak English or Welsh as their native language for the third year in a row.
Apple tells us that one in five students at Wilhelm Ferdinand Schussler Day School in Düsseldorf, Germany, speaks German as a second language. Teachers are helping students learn German at their own pace using an iPad provided as part of the school’s 1:1 iPad scheme. Since that scheme began, the graduation rate among participating students has risen 20 percent, to 100 percent.
This is all highly interesting, but with fantastic language learning apps like Duolingo already available and Apple’s continued focus on speech-based user interfaces, a future in which we use AI and AR to learn languages at our own pace and in our own way draws closer every day.
How we may learn
See it this way:
Imagine an interactive language learning app that uses the built-in CoreML imaging intelligence to identify ordinary objects around you.
You would point your iPad’s camera around the room you are in, and labels would appear showing you the name of each object in your native language and in the target language, along with a control you could tap to pick up more information about those items.
Tap this button and you’d hear the name and pronunciation of the object and then potentially be taken through interactive scenes in which that object is used, giving the language learning app a chance to teach you verbs, adjectives and all the associated language you might need to handle real situations.
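The object-naming step of that imagined app is already within reach using iOS 13's built-in Vision classifier. The sketch below is an illustration under assumptions: `labelObjects` is a hypothetical function name, and the commented-out `translate` lookup stands in for whatever translation service the app would use; only the Vision calls themselves are real API.

```swift
import Vision
import CoreGraphics

// Sketch: identifying everyday objects in a camera frame so they can be
// labelled in an AR scene. Uses iOS 13's built-in VNClassifyImageRequest,
// which needs no bundled CoreML model.
func labelObjects(in frame: CGImage) {
    let request = VNClassifyImageRequest { request, _ in
        guard let observations = request.results as? [VNClassificationObservation] else { return }
        // Keep only the most confident guesses for on-screen labels.
        for observation in observations.prefix(3) where observation.confidence > 0.3 {
            let english = observation.identifier            // e.g. "cup"
            // let target = translate(english, to: "cy")    // hypothetical lookup
            print(english, observation.confidence)
        }
    }
    let handler = VNImageRequestHandler(cgImage: frame, options: [:])
    try? handler.perform([request])
}
```

From there, ARKit would anchor each label in space over the recognized object, and tapping a label could trigger the pronunciation and interactive scenes described above.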
Or, alternatively, AR and CoreML speech could combine to create a translation system – this could even work with sign language: Listen to a conversation with your device and watch the translator sign it to you on the screen, or hear a near-simultaneous translation on AirPods.
Some of these ideas are already emerging – take a look at MondlyAR, for example.
How do you think these technologies will help unite communities and bring the world together?