Artificial Intelligence and Machine Learning (AI and ML) technologies have come a long way since their inception. Who would have thought that we would have working computer-based assistants that can do things like manage our schedules? Who would have thought that we could even use these assistants to manage our homes? These systems are even being used to help diagnose cancer patients, something that would have been impossible without doctors even five years ago.
Amazon Web Services (AWS) is at the forefront of AI and ML technology. As one of the world's largest technology innovators, it naturally has an advantage in feeding enough data into the technology to accelerate its development. Because it is also one of the largest technology firms in existence, it is well placed to put AI and ML into places and applications we may never have imagined.
Linguistics is one field that has benefitted greatly from these technologies. Language, if you think about it, is also one of the most complex things that we humans can create and understand. Its context and interpretation can be affected by plenty of things too: region, culture, community, heritage, and even lineage.
For example, there are differences between the French spoken in France and in Canada. There are even subtle differences between the French spoken in France and in Monaco, or in Switzerland. English, the most widely spoken language of all, differs in spelling and usage across Britain, the Americas, and Australia. The English spoken today is also a distinct form of the language spoken 50 years ago.
Technology in linguistics has progressed through years and years of feeding all this data into it. That has allowed us to communicate with global communities with far more ease than ever before. AWS has taken it a little further than that, though, going beyond spoken or written languages. Through a device called AWS DeepLens, it has developed translation algorithms for sign languages.
While that technology might sound as simple as gesture control, it is much more than that. Yes, it is technically gesture control and recognition, but it is far larger and more complex than just a solution for end-point devices. The trick is to teach the underlying algorithm to recognise all the available sign words and even individual letters. The AWS DeepLens community projects so far have learnt to recognise most of the letters of the American Sign Language (ASL) alphabet.
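To make that idea concrete, here is a minimal sketch of the kind of inference loop such a project might run: each camera frame is preprocessed and passed to a trained image classifier that outputs one ASL letter. The model file `asl_letters.h5`, its 64x64 grayscale input shape, and the label list are assumptions made purely for illustration, not the actual DeepLens community project code.

```python
import string

import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Hypothetical classifier trained on ASL letter images (assumption: 64x64
# grayscale inputs, one output class per letter of the alphabet).
MODEL_PATH = "asl_letters.h5"
LABELS = list(string.ascii_uppercase)

model = load_model(MODEL_PATH)

def classify_frame(frame_bgr):
    """Return the most likely ASL letter for a single camera frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    resized = cv2.resize(gray, (64, 64))
    batch = resized.astype("float32").reshape(1, 64, 64, 1) / 255.0
    probabilities = model.predict(batch, verbose=0)[0]
    return LABELS[int(np.argmax(probabilities))]

# Read frames from the default camera and spell out recognised letters.
capture = cv2.VideoCapture(0)
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    print(classify_frame(frame), end="", flush=True)
capture.release()
```

On a DeepLens device the frames would come from the on-board camera and the model would be deployed to the device itself, but the classify-per-frame structure is the same.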
But the technology also goes beyond recognising letters to understanding proper words, using the algorithm behind Amazon Alexa. It is not just about communicating with your friends anymore. It is about using the platform as a home assistant, a customer service tool, a command centre, and a user-defined PC experience that mimics voice control and command. Instead of using voice, though, it is all in the gestures.
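As a rough illustration of that idea, the sketch below maps recognised sign phrases to assistant actions, the same way a voice assistant routes spoken requests to intents. The phrases, the `handle_sign_phrase` helper, and the action names are all hypothetical; they simply show how gesture output could feed the same command pipeline that voice normally does.

```python
# Hypothetical mapping from recognised sign phrases to assistant actions,
# standing in for the intent-routing step a voice assistant would perform.
SIGN_INTENTS = {
    "lights on": "smart_home.turn_on_lights",
    "lights off": "smart_home.turn_off_lights",
    "play music": "media.play_default_playlist",
    "what time": "clock.speak_current_time",
}

def handle_sign_phrase(phrase: str) -> str:
    """Route a recognised sign phrase to an assistant intent, if one matches."""
    intent = SIGN_INTENTS.get(phrase.lower().strip())
    if intent is None:
        return "intent.not_recognised"
    # A real assistant would now dispatch this intent to the relevant skill.
    return intent

print(handle_sign_phrase("Lights On"))    # smart_home.turn_on_lights
print(handle_sign_phrase("open window"))  # intent.not_recognised
```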
The tool they use is called Amazon Transcribe. It works just like any transcription app you can find in the market. It currently supports up to 31 languages, with more being added over time. It even supports ASL as a component of the pipeline that creates text from sign language.
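On the speech side, Amazon Transcribe is normally driven through the AWS SDK. The sketch below is a minimal boto3 example that starts a transcription job for an audio file already uploaded to S3 and prints the location of the finished transcript; the bucket name, file key, and job name are placeholders, not values from this article.

```python
import time

import boto3

# Assumed placeholders: an S3 bucket you own and an audio file uploaded to it.
BUCKET = "my-example-bucket"
AUDIO_KEY = "recordings/meeting.mp3"
JOB_NAME = "example-transcription-job"

transcribe = boto3.client("transcribe")

# Kick off an asynchronous transcription job for the S3 audio file.
transcribe.start_transcription_job(
    TranscriptionJobName=JOB_NAME,
    Media={"MediaFileUri": f"s3://{BUCKET}/{AUDIO_KEY}"},
    MediaFormat="mp3",
    LanguageCode="en-US",
)

# Poll until the job finishes, then print the URI of the transcript JSON.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName=JOB_NAME)
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```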
Simple communication is just the beginning for the technology, though. AI and ML still have a long way to go, even in the medical field. Like the human race, though, the technology gets better every day. If you really think about it, the technology is not that new in the first place. We have been on the journey towards machine-built assistants ever since we started developing computers to help us with simple and complex mathematical problems.
It is just that the simple mathematical problem solver has become something much bigger today. Who would have thought that we would let computers fly a commercial airplane? Who would have thought that cars could drive themselves? Who would have thought that we could hire a private translator without spending any money or time? You just have to look in your pocket.